Conversation
Purpose of the change:
- Map ZFS pool members reported as raw Linux devices back to Drive Map bay IDs.
- Keep the README install links pointed at the Unraid repository.

How behavior was before:
- ZFS status and iostat output using names like sda1 or sdb1 could surface raw disk keys instead of bay IDs.
- The ZFS alias warning could appear even when Drive Map already knew how to resolve those devices.
- README install links still referenced pujitm/45d-drivemap-unraid.

Why that was a problem:
- The frontend expects bay IDs such as 1-1 and 1-2 when joining ZFS data to drive slots.
- Users could see incomplete ZFS context or misleading alias warnings on Unraid hosts.
- The documented install URL did not match the current unraid/45d-drivemap repository.

What the new change accomplishes:
- Reads drivemap.json as the canonical bay lookup for ZFS disk names.
- Canonicalizes raw disk, partition, and by-path names before building zfs_disks.
- Mirrors /etc/vdev_id.conf into /etc/zfs/vdev_id.conf when safe so native ZFS vdev_id can find aliases.
- Updates README release links to unraid/45d-drivemap.

How it works:
- Adds a cached Drive Map lookup in php/zfs_info.php using dev and dev-by-path fields.
- Strips common partition suffixes before resolving raw ZFS member names.
- Suppresses alias warnings only when every non-alias device resolves to a bay ID.
- Adds raw-device ZFS fixtures and assertions to tests/run.php.
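The suffix-stripping step described above can be sketched briefly. This is an illustrative Python model of the described behavior, not the plugin's PHP code; the exact patterns in php/zfs_info.php may differ:

```python
import re

def canonicalize_disk_name(name):
    """Reduce a raw ZFS member name to its whole-disk form (illustrative sketch)."""
    base = name.rsplit("/", 1)[-1]           # drop any /dev/ or by-path directory prefix
    base = re.sub(r"-part\d+$", "", base)    # by-path partition suffix: ...-part1
    m = re.match(r"^(nvme\d+n\d+)p\d+$", base)
    if m:                                    # NVMe partitions: nvme0n1p1 -> nvme0n1
        return m.group(1)
    m = re.match(r"^([a-z]+)\d+$", base)
    if m:                                    # SATA/SAS partitions: sda1 -> sda
        return m.group(1)
    return base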
📝 Walkthrough

README URLs updated; ZFS drivemap JSON loading and lookup added; disk-name canonicalization and partition-stripping introduced and used by ZFS parsers; map-generation script ensures /etc/zfs/vdev_id.conf symlink; ZFS fixtures and tests added/extended; server-info now emits a separate "Canvas Model"; front-end animation logic expanded to support grouped storage activity and unified "Show Animations" toggle.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant ZFS_Info as zfs_info.php
    participant Drivemap as Drivemap JSON
    participant ZFS_Cmd as ZFS Commands
    participant Output as JSON Response
    Client->>ZFS_Info: Request zfs_info
    ZFS_Info->>Drivemap: Load drivemap JSON (DRIVEMAP_OUTPUT_FILE/DRIVEMAP_OUTPUT_DIR)
    Drivemap-->>ZFS_Info: Disk→bay lookup
    ZFS_Info->>ZFS_Cmd: Run zpool list/status/iostat
    ZFS_Cmd-->>ZFS_Info: Raw vdev/device names (e.g., /dev/..., sda1)
    ZFS_Info->>ZFS_Info: Strip partition suffixes & canonicalize names
    ZFS_Info->>ZFS_Info: Map canonical names to bay IDs via lookup
    ZFS_Info->>Output: Build zfs_disks payload (mapped devices, warnings)
    Output-->>Client: Return JSON response
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
**Important**: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)
✅ Passed checks (1 passed)
**Warning**: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ast-grep (0.42.1): `assets/45d/45drives-disks/assets/index.bbceb330.js`
Actionable comments posted: 1
🧹 Nitpick comments (3)
scripts/45d-generate-map (1)
**101-108: Log the reason when the ZFS config link is skipped or fails.**

Right now only the success path is visible in `drivemap.log`. If `/etc/zfs/vdev_id.conf` already exists as a regular file, or `symlink()` fails, native ZFS will still miss the aliases and there is nothing in the log explaining why.

Also applies to: 134-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/45d-generate-map` around lines 101-108: the code silently skips or fails to create the ZFS alias symlink; update the branch that checks file_exists($target) && !is_link($target) to write a clear message to drivemap.log explaining that $target exists as a regular file and the link was skipped, and when calling `@symlink`($alias_file, $target) remove the silence operator and on failure log an error with the returned false plus error_get_last() (or equivalent) so the exact cause is recorded; also log when an existing is_link($target) is unlinked (via `@unlink`) and when symlink succeeds so all outcomes (skipped, unlinked, failed, succeeded) are visible.

php/zfs_info.php (1)
**96-97: Add one raw-NVMe fixture for this new normalization branch.**

The new fixtures cover `sda1`/`sdb1`, but not `nvme0n1p1`. A tiny NVMe case would lock in the behavior added here and keep the raw-device mapping change from becoming SATA-only by accident.

Based on learnings: preserve legacy parity, update tests, and explain intentional behavior changes in PR notes/commit messages.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/zfs_info.php` around lines 96-97: add a unit/fixture that exercises the new NVMe normalization branch: include an input like "nvme0n1p1" with the expected normalized output "nvme0n1" so the regex branch (/^(nvme[0-9]+n[0-9]+)p[0-9]+$/ used on $name) is covered; update the test/fixtures file where sda1/sdb1 samples live to include this nvme case and run the test to ensure the code path that returns $match[1] is exercised, and add a short note in the PR/commit message explaining this intentional behavior change and that legacy parity is preserved.

tests/run.php (1)
**554: Make the warnings assertion strict to protect API contract parity.**

At line 554, `empty(...)` also passes when `warnings` is missing. Consider asserting that the key exists and equals `[]` so schema regressions are caught.

Suggested test tightening:

```diff
-assert_true(empty($zfs_raw_info['warnings']), 'raw-device zfs_info suppresses alias warning when drivemap can resolve devices');
+assert_true(array_key_exists('warnings', $zfs_raw_info), 'raw-device zfs_info includes warnings field');
+assert_equal($zfs_raw_info['warnings'], [], 'raw-device zfs_info suppresses alias warning when drivemap can resolve devices');
```

Based on learnings: preserve legacy parity, update tests, and explain intentional behavior changes in PR notes/commit messages.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/run.php` at line 554, The test currently uses assert_true(empty($zfs_raw_info['warnings'])) which passes if the 'warnings' key is missing; update the assertion to strictly enforce the API contract by asserting that the 'warnings' key exists on $zfs_raw_info and equals an empty array (e.g., use an equality or array-equals assertion against [] for $zfs_raw_info['warnings']). Locate the assertion referencing $zfs_raw_info and 'warnings' and replace the loose empty() check with a strict equality/assertion to [] so schema regressions are detected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@php/zfs_info.php`:
- Around line 41-57: The function zfs_drivemap_lookup currently memoizes the
first result in static $lookup and never refreshes it, causing stale
drivemap.json data; change the cache to include the file mtime (or remove the
static memo entirely) and refresh when the file changes: obtain $map_path via
the existing logic, get its modification time, and only return the cached data
if the stored mtime matches the current file mtime; otherwise reload via
zfs_load_json($map_path) and update the cache (or simply drop the static cache
and always call zfs_load_json) so subsequent calls see regenerated drivemap.json
and updated rows/warnings.
---
Nitpick comments:
In `@php/zfs_info.php`:
- Around line 96-97: Add a unit/fixture that exercises the new NVMe
normalization branch: include an input like "nvme0n1p1" with the expected
normalized output "nvme0n1" so the regex branch (/^(nvme[0-9]+n[0-9]+)p[0-9]+$/
used on $name) is covered; update the test/fixtures file where sda1/sdb1 samples
live to include this nvme case and run the test to ensure the code path that
returns $match[1] is exercised, and add a short note in the PR/commit message
explaining this intentional behavior change and that legacy parity is preserved.
In `@scripts/45d-generate-map`:
- Around line 101-108: The code silently skips or fails to create the ZFS alias
symlink; update the branch that checks file_exists($target) && !is_link($target)
to write a clear message to drivemap.log explaining that $target exists as a
regular file and the link was skipped, and when calling `@symlink`($alias_file,
$target) remove the silence operator and on failure log an error with the
returned false plus error_get_last() (or equivalent) so the exact cause is
recorded; also log when an existing is_link($target) is unlinked (via `@unlink`)
and when symlink succeeds so all outcomes (skipped, unlinked, failed, succeeded)
are visible.
In `@tests/run.php`:
- Line 554: The test currently uses
assert_true(empty($zfs_raw_info['warnings'])) which passes if the 'warnings' key
is missing; update the assertion to strictly enforce the API contract by
asserting that the 'warnings' key exists on $zfs_raw_info and equals an empty
array (e.g., use an equality or array-equals assertion against [] for
$zfs_raw_info['warnings']). Locate the assertion referencing $zfs_raw_info and
'warnings' and replace the loose empty() check with a strict equality/assertion
to [] so schema regressions are detected.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7d2ebbc7-becf-4828-b6be-5ba56ecb5668
📒 Files selected for processing (10)
- README.md
- php/zfs_info.php
- scripts/45d-generate-map
- tests/fixtures/zfs_raw/zfs_list.txt
- tests/fixtures/zfs_raw/zpool_iostat_path_tank.txt
- tests/fixtures/zfs_raw/zpool_iostat_tank.txt
- tests/fixtures/zfs_raw/zpool_list.txt
- tests/fixtures/zfs_raw/zpool_status_path_tank.txt
- tests/fixtures/zfs_raw/zpool_status_tank.txt
- tests/run.php
Purpose of the change:
- Update the inferred HL15 display model to the current Unraid and 45Homelab product name.

How behavior was before:
- Hosts detected through the MW34-SP0-00 plus 16-port HBA fallback profile emitted Model as HomeLab-HL15.

Why that was a problem:
- The UI displayed an outdated product label for the X-15 hardware.

What the new change accomplishes:
- Emits Unraid >< 45Homelab X-15 for that fallback hardware profile.
- Keeps the underlying chassis and alias style as HL15 and HOMELAB so mapping behavior is unchanged.

How it works:
- Updates the hardware profile model string in 45d-generate-server-info.
- Updates local and remote smoke test expectations for the fallback profile.
Purpose of the change:
- Ensure ZFS bay resolution sees regenerated drivemap.json data within the same PHP process.

How behavior was before:
- zfs_drivemap_lookup cached the first lookup result in a static variable and never checked the source file again.

Why that was a problem:
- If drivemap.json was regenerated after the first lookup, later ZFS parsing could keep using stale device-to-bay mappings and produce outdated disk rows or warnings.

What the new change accomplishes:
- Removes the static memoization so every lookup reads the current drivemap.json contents.

How it works:
- zfs_drivemap_lookup now builds its lookup from the resolved map path on each call.
- The test suite rewrites a map file in one PHP process and asserts the second lookup returns the updated bay.
- Purpose: rename the ZFS-specific animation control and make it available for non-ZFS storage activity.
- Before: the switch only rendered when ZFS was installed, and the animation helper only highlighted ZFS vdev and pool relationships.
- Problem: that made the UI look like a ZFS-only feature and hid useful activity animation behavior on systems without usable ZFS metadata.
- New behavior: the viewer shows a generic Show Animations toggle and falls back to storage groups derived from drivemap rows.
- How it works: App.vue builds animation_disks and animation_groups from bay IDs plus storage role, label, and filesystem type; the canvas helper preserves ZFS-specific highlighting when available and otherwise animates the selected disk plus related storage group members.
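The grouping step above can be sketched as follows — a Python model of the described App.vue logic, not the Vue source, with field names assumed from the drivemap rows and with the reviewers' later advice applied (standalone disks are not grouped by generic role or filesystem type):

```python
def build_animation_groups(rows):
    """Group occupied bays that share a storage identity (illustrative sketch)."""
    groups = {}
    for row in rows:
        if not row.get("occupied"):
            continue
        role = row.get("storage-role", "")
        label = row.get("storage-label", "")
        if role in ("array", "parity"):
            key = "unraid-array"          # all array/parity members animate together
        elif role in ("pool", "boot") and label:
            key = f"{role}:{label}"       # named pool or boot device group
        else:
            continue                      # standalone disks animate alone
        groups.setdefault(key, []).append(row["bay-id"])
    return groups
```

A standalone `disk` role produces no group key, so selecting it animates only that bay instead of every disk sharing its filesystem type.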
Actionable comments posted: 2
**Caution**: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js (1)
**126-137: ⚠️ Potential issue | 🟠 Major — Guard nested vdev access before dereference to avoid runtime breakage.**

`showAnimations()` can throw when a disk maps to a pool but `vdevs`/`vdev_idx` is missing or out of range. In that case, the function never reaches the intended fallback call and animations stop for that frame.

Suggested defensive fix:
```diff
-      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
+      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
         //current disk is member of a zpool
         let pool_idx = zfsInfo.zpools.findIndex(
           (pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name
         );
-        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+          const pool = zfsInfo.zpools[pool_idx];
+          const vdevIdx = zfsInfo.zfs_disks[cd].vdev_idx;
+          const currentVdev = pool?.vdevs?.[vdevIdx];
+          if (!Array.isArray(pool?.vdevs) || !Array.isArray(currentVdev?.disks)) {
+            return p5.showStorageActivity(cd, zfsInfo, diskLocations, y_offset);
+          }
           //we have the index of the zpool to which the current disk belongs
           p5.updateZfsAnimationState();
           //animate each disk within the same vdev as the current
-          zfsInfo.zpools[pool_idx].vdevs[
-            zfsInfo.zfs_disks[cd].vdev_idx
-          ].disks.forEach((dsk) => {
+          currentVdev.disks.forEach((dsk) => {
             //look at each disk in the same vdev as the current disk
             let dsk_idx = diskLocations.findIndex(
               (loc) => loc.BAY === dsk.name
             );
@@
-          zfsInfo.zpools[pool_idx].vdevs.forEach((vdev) => {
+          pool.vdevs.forEach((vdev) => {
+            if (!Array.isArray(vdev?.disks)) return;
             //look at each vdev in same zpool as the current disk
             vdev.disks.forEach((dsk) => {
```

Also applies to: 173-176, 217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js` around lines 126 - 137, The code in showAnimations() dereferences zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx] without checking that vdevs exists and that vdev_idx is in-range, which can throw and skip the fallback animation call; update the logic that currently calls p5.updateZfsAnimationState() then immediately iterates over .vdevs[...].disks to first verify zfsInfo.zpools[pool_idx].vdevs is an array and that zfsInfo.zfs_disks[cd].vdev_idx is a valid index (>=0 and < vdevs.length), and only then access .disks.forEach(...); if the guard fails, call the intended fallback (p5.updateZfsAnimationState()) and skip the disk iteration; apply the same defensive checks at the other occurrences that access .vdevs and .vdev_idx (the blocks flagged at lines 173-176 and 217).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 36997-37013: animationGroupKey is producing coarse fallback keys
(fs:... or role:...) that group unrelated standalone disks; remove those
fallbacks so only true shared identities (array, parity, pool:<label>,
boot:<label>) produce a non-empty key. Update the animationGroupKey function to
return "" for slots that lack a shared pool/boot/array/parity identity (i.e.,
drop the fs-type and generic role fallbacks like `fs:${fsType}` and
`role:${role}`), and ensure any grouping logic in showStorageActivity relies
only on the non-empty keys returned by animationGroupKey (or an explicit
shared-group property) so standalone disks are not animated together.
- Around line 23281-23285: The ZFS branch in p5.showAnimations can still
dereference vdevs[vdev_idx].disks when the pool/vdev tree is partially hydrated;
update the guard so before using vdevs[vdev_idx].disks you verify pool_idx >= 0,
zfsInfo.zpools[pool_idx] exists, zfsInfo.zpools[pool_idx].vdevs is an array,
vdev_idx (from finding the vdev) is >= 0 and vdevs[vdev_idx] exists and has a
disks array that contains zfsInfo.zfs_disks[cd]; if any of those checks fail,
take the generic storage animation fallback path instead of proceeding into the
ZFS-specific logic.
---
Outside diff comments:
In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js`:
- Around line 126-137: The code in showAnimations() dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx] without checking
that vdevs exists and that vdev_idx is in-range, which can throw and skip the
fallback animation call; update the logic that currently calls
p5.updateZfsAnimationState() then immediately iterates over .vdevs[...].disks to
first verify zfsInfo.zpools[pool_idx].vdevs is an array and that
zfsInfo.zfs_disks[cd].vdev_idx is a valid index (>=0 and < vdevs.length), and
only then access .disks.forEach(...); if the guard fails, call the intended
fallback (p5.updateZfsAnimationState()) and skip the disk iteration; apply the
same defensive checks at the other occurrences that access .vdevs and .vdev_idx
(the blocks flagged at lines 173-176 and 217).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 997e2386-34b3-443b-a1a2-00cab92abe4e
📒 Files selected for processing (4)
- assets/45d/45drives-disks/assets/index.bbceb330.js
- vendor/45drives/cockpit-hardware/45drives-disks/src/App.vue
- vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
- vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js
```diff
 p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
   if (zfsInfo.zfs_installed) {
-    if (zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
+    if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
       let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-      if (typeof pool_idx != "undefined") {
+      if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
```
**Guard partial ZFS topology before taking the ZFS path.**

The new fallback only runs when this branch returns false, but this guard still allows the later `vdevs[vdev_idx].disks` dereference to throw if the disk record exists before the pool/vdev tree is fully hydrated. In that case the generic storage animation never gets a chance to run.

Suggested fix:
```diff
 p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
   if (zfsInfo.zfs_installed) {
-    if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-      let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-      if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+    const zfsDisk = zfsInfo.zfs_disks?.[cd];
+    if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+      const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+      const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+      if (vdev?.disks?.length) {
         p5.updateZfsAnimationState();
-        zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+        vdev.disks.forEach((dsk) => {
           // existing rendering logic
         });
         return true;
       }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23281 -
23285, The ZFS branch in p5.showAnimations can still dereference
vdevs[vdev_idx].disks when the pool/vdev tree is partially hydrated; update the
guard so before using vdevs[vdev_idx].disks you verify pool_idx >= 0,
zfsInfo.zpools[pool_idx] exists, zfsInfo.zpools[pool_idx].vdevs is an array,
vdev_idx (from finding the vdev) is >= 0 and vdevs[vdev_idx] exists and has a
disks array that contains zfsInfo.zfs_disks[cd]; if any of those checks fail,
take the generic storage animation fallback path instead of proceeding into the
ZFS-specific logic.
```js
const animationGroupKey = (slot) => {
  if (!slot || !slot.occupied)
    return "";
  const role = slot["storage-role"] || "";
  const label = slot["storage-label"] || "";
  const fsType = slot["fs-type"] || "";
  if (role === "array" || role === "parity")
    return "unraid-array";
  if (role === "pool" && label)
    return `pool:${label}`;
  if (role === "boot" && label)
    return `boot:${label}`;
  if (fsType)
    return `fs:${fsType}`;
  if (role)
    return `role:${role}`;
  return "";
```
**Don't group unrelated disks by coarse fallback keys.**

When a slot has no shared pool/boot identity, this code falls back to keys like `fs:xfs` or `role:disk`. `showStorageActivity()` then animates every member of that group, so unrelated standalone disks can light up together just because they share a filesystem type or generic role.

Suggested fix:
```diff
 const animationGroupKey = (slot) => {
   if (!slot || !slot.occupied)
     return "";
   const role = slot["storage-role"] || "";
   const label = slot["storage-label"] || "";
-  const fsType = slot["fs-type"] || "";
   if (role === "array" || role === "parity")
     return "unraid-array";
   if (role === "pool" && label)
     return `pool:${label}`;
   if (role === "boot" && label)
     return `boot:${label}`;
-  if (fsType)
-    return `fs:${fsType}`;
-  if (role)
-    return `role:${role}`;
   return "";
 };
@@
 const bayId = slot?.["bay-id"];
-const groupKey = animationGroupKey(slot);
+const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
 if (!bayId || !groupKey)
   return;
```

Also applies to: 37018-37035
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 36997 -
37013, animationGroupKey is producing coarse fallback keys (fs:... or role:...)
that group unrelated standalone disks; remove those fallbacks so only true
shared identities (array, parity, pool:<label>, boot:<label>) produce a
non-empty key. Update the animationGroupKey function to return "" for slots that
lack a shared pool/boot/array/parity identity (i.e., drop the fs-type and
generic role fallbacks like `fs:${fsType}` and `role:${role}`), and ensure any
grouping logic in showStorageActivity relies only on the non-empty keys returned
by animationGroupKey (or an explicit shared-group property) so standalone disks
are not animated together.
- Purpose: map the Unraid >< 45Homelab X-15 model string to the existing HL15 canvas sketch.
- Before: the server info fallback emitted the correct model label, but the frontend parser only recognized older HomeLab-HL15 style identifiers.
- Problem: the disk viewer rendered the unsupported-model fallback even though the machine should use the HL15 layout.
- New behavior: model strings matching 45Homelab X-15 now resolve to HomeLabHL15.
- How it works: CanvasSection checks the X-15 marketing name before the legacy regex parser; the checked-in served bundle mirrors the source change.
Actionable comments posted: 1
♻️ Duplicate comments (2)
assets/45d/45drives-disks/assets/index.bbceb330.js (2)
**23281-23287: ⚠️ Potential issue | 🟠 Major — Guard the ZFS path before dereferencing `vdevs[vdev_idx].disks`.**

`zfsInfo.zfs_disks[cd]` being present does not guarantee the pool's `vdevs` tree is fully hydrated. This can still throw before the generic `showStorageActivity()` fallback runs, so the new non-ZFS animation path never gets a chance to recover.

Suggested fix:

```diff
 p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
   if (zfsInfo.zfs_installed) {
-    if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-      let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-      if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+    const zfsDisk = zfsInfo.zfs_disks?.[cd];
+    if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+      const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+      const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+      if (Array.isArray(vdev?.disks) && vdev.disks.length > 0) {
         p5.updateZfsAnimationState();
-        zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+        vdev.disks.forEach((dsk) => {
           let dsk_idx = diskLocations2.findIndex((loc) => loc.BAY === dsk.name);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23281 - 23287, The code in p5.showAnimations dereferences zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without checking that vdevs, the vdev at vdev_idx, or its disks array exist; add a guard after locating pool_idx (e.g., check pool && pool.vdevs && pool.vdevs[vdev_idx] && Array.isArray(pool.vdevs[vdev_idx].disks)) and only call p5.updateZfsAnimationState() and iterate disks if that guard passes; otherwise call the non‑ZFS fallback p5.showStorageActivity(...) so the code doesn’t throw when the vdev tree is not hydrated.
**37000-37035: ⚠️ Potential issue | 🟠 Major — Don't group unrelated disks by filesystem type or generic role.**

`fs:${fsType}` and `role:${role}` are still shared keys, so `showStorageActivity()` will animate unrelated standalone disks together just because they both happen to be `xfs` or `disk`.

Suggested fix:

```diff
 const animationGroupKey = (slot) => {
   if (!slot || !slot.occupied)
     return "";
   const role = slot["storage-role"] || "";
   const label = slot["storage-label"] || "";
-  const fsType = slot["fs-type"] || "";
   if (role === "array" || role === "parity")
     return "unraid-array";
   if (role === "pool" && label)
     return `pool:${label}`;
   if (role === "boot" && label)
     return `boot:${label}`;
-  if (fsType)
-    return `fs:${fsType}`;
-  if (role)
-    return `role:${role}`;
   return "";
 };
@@
-const groupKey = animationGroupKey(slot);
+const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 37000 - 37035, animationGroupKey currently returns shared keys like `fs:${fsType}` and `role:${role}`, which causes unrelated disks to be grouped; change animationGroupKey (and adjust updateAnimationInfo usage) so that it does not emit generic shared group keys: either return an empty string for generic filesystem types and generic roles (e.g., when role === 'disk' or fsType is non-unique) or produce a per-disk unique key such as `bay:${bayId}`; update updateAnimationInfo to pass bayId into the decision (or use the bayId when groupKey would otherwise be generic) so showStorageActivity() no longer animates unrelated disks together.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 36582-36586: The fast-path check for X-15 is fine but the fallback
regex assigned to testString is too narrow compared to the pattern used in
CanvasSection.vue, causing models like Studio, HL15_BEAST, STUDIO8, F16 and VM2
to miss; update the regex used in the testString assignment (the
/(Storinator|Stornado|HomeLab|Professional|Proxinator)-.../m pattern) to match
the exact alternatives and group ordering from CanvasSection.vue (so it includes
Studio/STUDIO8, HL15_BEAST, F16, VM2, etc.), keep the same capture groups so
enableString = testString ? testString[1] + testString[4] + (testString[3] ?
testString[3] : "") : "" continues to build the identifier correctly, and ensure
the updated regex is applied where modelString is tested.
---
Duplicate comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 23281-23287: The code in p5.showAnimations dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
checking that vdevs, the vdev at vdev_idx, or its disks array exist; add a guard
after locating pool_idx (e.g., check pool && pool.vdevs && pool.vdevs[vdev_idx]
&& Array.isArray(pool.vdevs[vdev_idx].disks)) and only call
p5.updateZfsAnimationState() and iterate disks if that guard passes; otherwise
call the non‑ZFS fallback p5.showStorageActivity(...) so the code doesn’t throw
when the vdev tree is not hydrated.
- Around line 37000-37035: animationGroupKey currently returns shared keys like
`fs:${fsType}` and `role:${role}`, which causes unrelated disks to be grouped;
change animationGroupKey (and adjust updateAnimationInfo usage) so that it does
not emit generic shared group keys: either return an empty string for generic
filesystem types and generic roles (e.g., when role === 'disk' or fsType is
non-unique) or produce a per-disk unique key such as `bay:${bayId}`; update
updateAnimationInfo to pass bayId into the decision (or use the bayId when
groupKey would otherwise be generic) so showStorageActivity() no longer animates
unrelated disks together.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3024ff3b-2649-4edd-a7d3-883d520bfd46
📒 Files selected for processing (2)
- assets/45d/45drives-disks/assets/index.bbceb330.js
- vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
🚧 Files skipped from review as they are similar to previous changes (1)
- vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
- Purpose: move model-to-layout selection onto a stable backend contract instead of a frontend marketing-name alias.
- Before: the frontend parsed the display Model string directly and had a hardcoded X-15 regex fallback.
- Problem: display names can change for branding, which made the canvas layout selection brittle and mixed presentation text with machine routing.
- New behavior: server_info.json includes Canvas Model while preserving the pretty Model value for display.
- How it works: generated and copied server info now receive a canonical Canvas Model derived from alias style and chassis; CanvasSection reads Canvas Model first and falls back to Model for older data.
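The read-new-key-first contract described above can be sketched in one function. This is a hypothetical Python model of the fallback; the real logic lives in CanvasSection.vue:

```python
def canvas_model(server_info):
    """Prefer the stable 'Canvas Model' key; fall back to 'Model' for older data."""
    return server_info.get("Canvas Model") or server_info.get("Model", "")
```

Keeping the routing key separate means the display-facing "Model" string stays free to change for branding without breaking canvas layout selection.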
♻️ Duplicate comments (2)
assets/45d/45drives-disks/assets/index.bbceb330.js (2)
**23283-23288: ⚠️ Potential issue | 🟠 Major — Guard partial ZFS topology before dereferencing `vdevs[vdev_idx].disks`.**

Line 23287 can still throw when `vdevs` or the selected `vdev_idx` is missing during partial hydration, which prevents falling back to `showStorageActivity()`.

Suggested fix:
```diff
-    if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-      let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-      if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+    const zfsDisk = zfsInfo.zfs_disks?.[cd];
+    if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+      const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+      const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+      if (vdev?.disks?.length) {
         p5.updateZfsAnimationState();
-        zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+        vdev.disks.forEach((dsk) => {
           let dsk_idx = diskLocations2.findIndex((loc) => loc.BAY === dsk.name);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23283 - 23288, The code dereferences zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without guarding for missing vdevs or a missing vdev index; update the block that uses pool_idx and vdev_idx to first verify zfsInfo.zpools[pool_idx].vdevs exists and that zfsInfo.zfs_disks[cd].vdev_idx is a valid index (and that vdevs[vdev_idx].disks exists) before iterating, and if any of those checks fail call showStorageActivity() (and avoid calling p5.updateZfsAnimationState() on an incomplete topology) so partial hydration cannot throw.
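A runnable sketch of the guard this comment asks for, assuming the `zfsInfo` shape named in the diff (`zpools`, `zfs_disks`, `vdev_idx`); `resolveVdev` is an illustrative helper, and returning `null` stands in for the fallback to `showStorageActivity()`:

```javascript
// Resolve a disk's vdev only when the whole zpools -> vdevs -> disks path
// exists; null means "incomplete topology, fall back to the generic path".
function resolveVdev(zfsInfo, diskName) {
  const zfsDisk = zfsInfo?.zfs_disks?.[diskName];
  if (!Array.isArray(zfsInfo?.zpools) || !zfsDisk) return null;
  const pool = zfsInfo.zpools.find((p) => p.name === zfsDisk.zpool_name);
  const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
  return vdev?.disks?.length ? vdev : null;
}

// Partial hydration: the pool exists but its vdevs have not arrived yet.
const partial = {
  zpools: [{ name: "tank" }],
  zfs_disks: { sda: { zpool_name: "tank", vdev_idx: 0 } },
};
console.log(resolveVdev(partial, "sda")); // -> null instead of a TypeError
```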
36998-37014: ⚠️ Potential issue | 🟠 Major

Avoid coarse fallback grouping (`fs:*`, `role:*`) for unrelated disks.

Line 37010-Line 37013 still groups by filesystem/role, so unrelated standalone disks can animate together. This was previously flagged and is still present.

Suggested fix

```diff
       const animationGroupKey = (slot) => {
         if (!slot || !slot.occupied) return "";
         const role = slot["storage-role"] || "";
         const label = slot["storage-label"] || "";
-        const fsType = slot["fs-type"] || "";
         if (role === "array" || role === "parity") return "unraid-array";
         if (role === "pool" && label) return `pool:${label}`;
         if (role === "boot" && label) return `boot:${label}`;
-        if (fsType)
-          return `fs:${fsType}`;
-        if (role)
-          return `role:${role}`;
         return "";
       };
@@
-        const groupKey = animationGroupKey(slot);
+        const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
         if (!bayId || !groupKey) return;
```

Also applies to: 37021-37023
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 36998 - 37014, animationGroupKey currently falls back to coarse grouping strings (`fs:${fsType}` and `role:${role}`) which causes unrelated standalone disks to animate together; update the animationGroupKey function to stop returning `fs:*` or `role:*` as fallbacks and instead return a more specific identifier (e.g., a unique slot identifier like slot.id/slot.serial) or an empty string so standalone disks are not grouped, and apply the same change to the other identical grouping occurrence elsewhere in the file.
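The suggested grouping can also be written with the per-bay fallback folded into the helper itself; the slot field names (`storage-role`, `storage-label`) follow the diff above, and `bayId` stands in for the bay identifier used at the call site:

```javascript
// Group disks for animation; every disk that is not part of a known
// array/pool/boot group gets a key unique to its bay, so unrelated
// standalone disks never animate together.
function animationGroupKey(slot, bayId) {
  if (!slot || !slot.occupied) return "";
  const role = slot["storage-role"] || "";
  const label = slot["storage-label"] || "";
  if (role === "array" || role === "parity") return "unraid-array";
  if (role === "pool" && label) return `pool:${label}`;
  if (role === "boot" && label) return `boot:${label}`;
  // No coarse fs:* / role:* buckets: each remaining disk animates alone.
  return `disk:${bayId}`;
}

console.log(animationGroupKey({ occupied: true, "storage-role": "parity" }, "1-1")); // -> unraid-array
console.log(animationGroupKey({ occupied: true, "storage-role": "cache" }, "1-4")); // -> disk:1-4
```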
🧹 Nitpick comments (2)
vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue (1)
157-158: Empty watcher appears to be dead code.

Line 158 contains `watch(enableZfsAnimations, () => {});`, which does nothing. If this is intended to force Vue reactivity tracking, consider adding a comment explaining the purpose. Otherwise, remove it.

♻️ Suggested fix

```diff
 const canvasCardBody = ref();
 const activeSketchStr = enableSketch(canvasModel.value);
-watch(enableZfsAnimations,()=>{});
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue` around lines 157 - 158, The empty watcher watch(enableZfsAnimations, ()=>{}) is dead code; either remove it or make its intent explicit for future readers. Locate the watcher call using the enableZfsAnimations symbol in CanvasSection.vue and either delete the watch(...) line if it serves no purpose, or replace it with a no-op body that includes a short comment (e.g., "// ensure reactivity for enableZfsAnimations") so the reason for leaving an otherwise empty watcher is clear to maintainers.

tests/run.php (1)
556-573: Test assumes `zfs_drivemap_lookup()` does not cache results.

This lookup refresh test relies on `zfs_drivemap_lookup()` re-reading the JSON file on each call. Based on the context snippet, the function reads from file without caching, so this test should work correctly. However, if caching is ever added to `zfs_drivemap_lookup()`, this test will become invalid without making the regression obvious.

Consider adding a comment documenting this assumption:

📝 Suggested comment

```diff
 require_once $root . '/php/zfs_info.php';
+// Note: This test relies on zfs_drivemap_lookup() reading from disk on each call (no caching).
 $lookup_refresh_path = $ctx['out_dir'] . '/zfs_lookup_refresh.json';
```
Verify each finding against the current code and only fix it if needed. In `@tests/run.php` around lines 556 - 573, Add a short comment above this refresh test explaining that it assumes zfs_drivemap_lookup() reads the DRIVEMAP_OUTPUT_FILE on each call (no in-process caching), so the test writes a new JSON and expects the second call to reflect the change; mention that if caching is added to zfs_drivemap_lookup() the test will become invalid and should either clear/reset the cache or use a cache-bypassing mechanism. Reference the symbols zfs_drivemap_lookup(), DRIVEMAP_OUTPUT_FILE (lookup_refresh_path) in the comment so future maintainers know what to update if caching is introduced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 23283-23288: The code dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
guarding for missing vdevs or a missing vdev index; update the block that uses
pool_idx and vdev_idx to first verify zfsInfo.zpools[pool_idx].vdevs exists and
that zfsInfo.zfs_disks[cd].vdev_idx is a valid index (and that
vdevs[vdev_idx].disks exists) before iterating, and if any of those checks fail
call showStorageActivity() (and avoid calling p5.updateZfsAnimationState() on an
incomplete topology) so partial hydration cannot throw.
- Around line 36998-37014: animationGroupKey currently falls back to coarse
grouping strings (`fs:${fsType}` and `role:${role}`) which causes unrelated
standalone disks to animate together; update the animationGroupKey function to
stop returning `fs:*` or `role:*` as fallbacks and instead return a more
specific identifier (e.g., a unique slot identifier like slot.id/slot.serial) or
an empty string so standalone disks are not grouped, and apply the same change
to the other identical grouping occurrence elsewhere in the file.
---
Nitpick comments:
In `@tests/run.php`:
- Around line 556-573: Add a short comment above this refresh test explaining
that it assumes zfs_drivemap_lookup() reads the DRIVEMAP_OUTPUT_FILE on each
call (no in-process caching), so the test writes a new JSON and expects the
second call to reflect the change; mention that if caching is added to
zfs_drivemap_lookup() the test will become invalid and should either clear/reset
the cache or use a cache-bypassing mechanism. Reference the symbols
zfs_drivemap_lookup(), DRIVEMAP_OUTPUT_FILE (lookup_refresh_path) in the comment
so future maintainers know what to update if caching is introduced.
In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue`:
- Around line 157-158: The empty watcher watch(enableZfsAnimations, ()=>{}) is
dead code; either remove it or make its intent explicit for future readers.
Locate the watcher call using the enableZfsAnimations symbol in
CanvasSection.vue and either delete the watch(...) line if it serves no purpose,
or replace it with a no-op body that includes a short comment (e.g., "// ensure
reactivity for enableZfsAnimations" ) so the reason for leaving an otherwise
empty watcher is clear to maintainers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d49e3749-6a09-4243-8a1b-61d099f3543a
📒 Files selected for processing (5)
- assets/45d/45drives-disks/assets/index.bbceb330.js
- scripts/45d-generate-server-info
- tests/remote_smoke.php
- tests/run.php
- vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
Summary
- Map raw ZFS member names like `sda1` and `sdb1` back to Drive Map bay IDs using `drivemap.json`
- Mirror aliases into `/etc/zfs/vdev_id.conf` when `/etc/vdev_id.conf` exists
- Point README release links at `unraid/45d-drivemap`
- Keep the display model `Unraid >< 45Homelab X-15` while adding a `Canvas Model` field to `server_info.json` so layout selection does not parse marketing/display names
- Add a unified `Show Animations` control and animate storage groups from Drive Map rows when ZFS metadata is unavailable

Tests
- `php tests/run.php`
- `php tests/remote_smoke.php`
- `node --check assets/45d/45drives-disks/assets/index.bbceb330.js`
- `node -e` parser check for `Canvas Model: HomeLab-HL15` -> `HomeLabHL15`
- `git diff --check`
- `php -l php/zfs_info.php`
- `php -l scripts/45d-generate-map`
- `php -l scripts/45d-generate-server-info`
- `php -l tests/run.php`
- `php -l tests/remote_smoke.php`

Live verification
- Deployed to `root@100.103.178.74` with `dev/live-deploy.sh --host root@100.103.178.74`
- Ran `/usr/local/emhttp/plugins/45d-drivemap/scripts/45d-generate-map`
- `zfs_info` warnings are empty and `zfs_ssd_pool` resolves to bay IDs
- `server_info` returns `Model: Unraid >< 45Homelab X-15`
- Redeployed with `dev/live-deploy.sh --host root@100.103.178.74 --include-assets`; `Canvas Model: HomeLab-HL15` is reported, and the frontend exposes `Show Animations`, uses `animation_disks`, and reads `Canvas Model`

Notes
- The `/etc/zfs/vdev_id.conf` symlink path is runtime behavior intended for Unraid/ZFS hosts; fixture tests cover the ZFS device mapping behavior.
- The built bundle under `assets/45d/45drives-disks/assets/` was edited directly because local Vite rebuilds require private `@45drives` package auth.
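The raw-device mapping those fixtures exercise (partition-suffix stripping plus the Drive Map bay lookup) can be sketched as below; the real implementation lives in `php/zfs_info.php`, so the suffix rules and the `drivemap` shape here are illustrative assumptions:

```javascript
// Illustrative canonicalization of raw ZFS member names before the Drive Map
// lookup; a sketch of the idea, not a copy of php/zfs_info.php.
function canonicalDiskName(name) {
  const base = name.replace(/^\/dev\//, "");
  // nvme0n1p2 -> nvme0n1 (strip the pN partition suffix)
  if (/^nvme\d+n\d+p\d+$/.test(base)) return base.replace(/p\d+$/, "");
  // sda1 -> sda (and the vd/hd/xvd device families)
  if (/^(sd|vd|hd|xvd)[a-z]+\d+$/.test(base)) return base.replace(/\d+$/, "");
  return base;
}

function zfsBayId(name, drivemap) {
  // drivemap: canonical device name -> bay ID, e.g. { sda: "1-1" }.
  return drivemap[canonicalDiskName(name)] || null;
}

const drivemap = { sda: "1-1", sdb: "1-2", nvme0n1: "1-3" };
console.log(zfsBayId("sda1", drivemap)); // -> 1-1
console.log(zfsBayId("/dev/nvme0n1p2", drivemap)); // -> 1-3
```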
Documentation
New Features
Tests
Chores