
Fix ZFS raw device bay mapping #2

Open
elibosley wants to merge 6 commits into main from codex/zfs-raw-device-bay-map

Conversation


@elibosley elibosley commented Apr 15, 2026

Summary

  • map raw ZFS pool member names like sda1 and sdb1 back to Drive Map bay IDs using drivemap.json
  • suppress ZFS alias warnings when raw devices can be resolved through the drive map
  • ensure native ZFS can find generated vdev aliases at /etc/zfs/vdev_id.conf when /etc/vdev_id.conf exists
  • update README install and release links to unraid/45d-drivemap
  • update the fallback HL15 display model label to Unraid >< 45Homelab X-15
  • add a stable Canvas Model field to server_info.json so layout selection does not parse marketing/display names
  • replace the ZFS-only animation toggle with a generic Show Animations control and animate storage groups from Drive Map rows when ZFS metadata is unavailable

Tests

  • php tests/run.php
  • php tests/remote_smoke.php
  • node --check assets/45d/45drives-disks/assets/index.bbceb330.js
  • node -e parser check for Canvas Model: HomeLab-HL15 -> HomeLabHL15
  • git diff --check
  • php -l php/zfs_info.php
  • php -l scripts/45d-generate-map
  • php -l scripts/45d-generate-server-info
  • php -l tests/run.php
  • php -l tests/remote_smoke.php

Live verification

  • Deployed to root@100.103.178.74 with dev/live-deploy.sh --host root@100.103.178.74
  • Regenerated remote map with /usr/local/emhttp/plugins/45d-drivemap/scripts/45d-generate-map
  • Confirmed remote zfs_info warnings are empty and zfs_ssd_pool resolves to bay IDs
  • Confirmed remote server_info returns Model: Unraid >< 45Homelab X-15
  • Redeployed latest frontend/backend with dev/live-deploy.sh --host root@100.103.178.74 --include-assets
  • Regenerated remote server info and confirmed Canvas Model: HomeLab-HL15
  • Confirmed remote served bundle contains Show Animations, animation_disks, and reads Canvas Model

Notes

  • The /etc/zfs/vdev_id.conf symlink path is runtime behavior intended for Unraid/ZFS hosts; fixture tests cover the ZFS device mapping behavior.
  • The animation and canvas-model changes update both Vue source and the checked-in served bundle under assets/45d/45drives-disks/assets/ because local Vite rebuilds require private @45drives package auth.

Summary by CodeRabbit

  • Documentation

    • Updated stable plugin and release URLs in the setup instructions
  • New Features

    • Improved ZFS disk identification and canonical device-name handling for more accurate bay mappings
    • Automatic creation of ZFS vdev config symlink when an alias file exists
    • New storage animation modes with per-disk grouping; toggle renamed to “Show Animations”
    • Added a derived "Canvas Model" field while preserving displayed Model in generated server info
  • Tests

    • Added ZFS fixtures and integration checks for bay mappings, warnings suppression, lookup refresh, and server-info behavior
  • Chores

    • Updated fallback server model label to "Unraid >< 45Homelab X-15"

Purpose of the change:
- Map ZFS pool members reported as raw Linux devices back to Drive Map bay IDs.
- Keep the README install links pointed at the Unraid repository.

How behavior was before:
- ZFS status and iostat output using names like sda1 or sdb1 could surface raw disk keys instead of bay IDs.
- The ZFS alias warning could appear even when Drive Map already knew how to resolve those devices.
- README install links still referenced pujitm/45d-drivemap-unraid.

Why that was a problem:
- The frontend expects bay IDs such as 1-1 and 1-2 when joining ZFS data to drive slots.
- Users could see incomplete ZFS context or misleading alias warnings on Unraid hosts.
- The documented install URL did not match the current unraid/45d-drivemap repository.

What the new change accomplishes:
- Reads drivemap.json as the canonical bay lookup for ZFS disk names.
- Canonicalizes raw disk, partition, and by-path names before building zfs_disks.
- Mirrors /etc/vdev_id.conf into /etc/zfs/vdev_id.conf when safe so native ZFS vdev_id can find aliases.
- Updates README release links to unraid/45d-drivemap.

How it works:
- Adds a cached Drive Map lookup in php/zfs_info.php using dev and dev-by-path fields.
- Strips common partition suffixes before resolving raw ZFS member names.
- Suppresses alias warnings only when every non-alias device resolves to a bay ID.
- Adds raw-device ZFS fixtures and assertions to tests/run.php.
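The canonicalization flow above can be sketched as follows. This is an illustrative JavaScript sketch only; the real implementation lives in php/zfs_info.php, the helper names and sample drive map here are ours, and the suffix patterns are the common Linux conventions rather than the exact set the PHP code handles.

```javascript
// Sketch: strip partition suffixes, then resolve the raw name to a bay ID
// via a lookup built from drivemap.json dev fields (names illustrative).
function stripPartitionSuffix(name) {
  // nvme0n1p1 -> nvme0n1, mmcblk0p2 -> mmcblk0
  const nvme = name.match(/^(nvme\d+n\d+)p\d+$/);
  if (nvme) return nvme[1];
  const mmc = name.match(/^(mmcblk\d+)p\d+$/);
  if (mmc) return mmc[1];
  // sda1 -> sda
  const sd = name.match(/^([a-z]+)\d+$/);
  if (sd) return sd[1];
  return name;
}

function resolveBay(rawName, driveMap) {
  // driveMap is an assumed shape, e.g. { sda: "1-1", sdb: "1-2" }
  const disk = stripPartitionSuffix(rawName.replace(/^\/dev\//, ""));
  return driveMap[disk] ?? null;
}
```

With this shape, the alias warning can be suppressed exactly when every non-alias member resolves to a non-null bay ID.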
@coderabbitai

coderabbitai bot commented Apr 15, 2026

📝 Walkthrough

Walkthrough

README URLs updated; ZFS drivemap JSON loading and lookup added; disk-name canonicalization and partition-stripping introduced and used by ZFS parsers; map-generation script ensures /etc/zfs/vdev_id.conf symlink; ZFS fixtures and tests added/extended; server-info now emits a separate "Canvas Model"; front-end animation logic expanded to support grouped storage activity and unified “Show Animations” toggle.

Changes

  • Documentation (README.md): Updated the stable plugin/release download and release-history links from pujitm/45d-drivemap-unraid to unraid/45d-drivemap.
  • ZFS Processing Core, PHP (php/zfs_info.php): Added a JSON drivemap loader/lookup (zfs_load_json, zfs_drivemap_lookup), partition-strip and canonicalization helpers (zfs_strip_partition_suffix, canonical_zfs_disk_name), threaded the lookup through generate_zfs_info(), and extended parse/verify signatures to accept $disk_lookup.
  • Map Generation Script (scripts/45d-generate-map): Added ensure_zfs_vdev_config_link($alias_file) to create the /etc/zfs/vdev_id.conf symlink to /etc/vdev_id.conf when present and log success.
  • Server Info Script (scripts/45d-generate-server-info): Changed one hardware-profile model string to Unraid >< 45Homelab X-15 and added logic to derive and write a new Canvas Model field into the generated server_info JSON.
  • Tests / Fixtures (tests/fixtures/zfs_raw/*, tests/fixtures/zpool_*, tests/fixtures/zfs_raw/zfs_list.txt): Added multiple zpool/zfs raw-output fixtures (zpool list/status/iostat, zfs list) representing a tank mirror and device-path variants for parsing and drivemap resolution.
  • Tests / Runners (tests/run.php, tests/remote_smoke.php): Extended tests to validate zfs_info output mappings (bay IDs present, raw device key omitted, empty warnings) and drivemap lookup refresh behavior; updated HL15 fallback expectations and added Scenario 10 verifying Canvas Model vs displayed Model.
  • Frontend — Assets Bundle (assets/45d/45drives-disks/assets/index.bbceb330.js): Added p5.showStorageActivity and p5.showAnimations (aliasing p5.showZfs), added storage_active/storage_group animation colors, and changed the rendering flow to prefer ZFS animations with a fallback to grouped storage activity.
  • Frontend — Vue App (vendor/.../src/App.vue): Added animationGroupKey and updateAnimationInfo to derive zfsInfo.animation_disks and zfsInfo.animation_groups from disk rows; wired into disk-info parsing.
  • Frontend — Canvas Component (vendor/.../src/components/CanvasSection.vue): Removed conditional rendering of the switch group, renamed the label to "Show Animations", and switched sketch selection to use Canvas Model (falling back to Model).
  • Frontend — Animation Module (vendor/.../src/components/zfsAnimation.js): Refactored p5.showZfs into p5.showAnimations with guard checks, added p5.showStorageActivity, and introduced storage-group animation logic and colors.

Sequence Diagram

sequenceDiagram
    participant Client
    participant ZFS_Info as zfs_info.php
    participant Drivemap as Drivemap JSON
    participant ZFS_Cmd as ZFS Commands
    participant Output as JSON Response

    Client->>ZFS_Info: Request zfs_info
    ZFS_Info->>Drivemap: Load drivemap JSON (DRIVEMAP_OUTPUT_FILE/DRIVEMAP_OUTPUT_DIR)
    Drivemap-->>ZFS_Info: Disk→bay lookup
    ZFS_Info->>ZFS_Cmd: Run zpool list/status/iostat
    ZFS_Cmd-->>ZFS_Info: Raw vdev/device names (e.g., /dev/..., sda1)
    ZFS_Info->>ZFS_Info: Strip partition suffixes & canonicalize names
    ZFS_Info->>ZFS_Info: Map canonical names to bay IDs via lookup
    ZFS_Info->>Output: Build zfs_disks payload (mapped devices, warnings)
    Output-->>Client: Return JSON response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibble bytes beneath the tray,
I hop from sda to bay,
I strip the parts and find the names,
Link configs, fixtures, tests, and frames—
Animations dance while models claim their fame.


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)

  • Title check: ❌ Error. The pull request title does not follow the required conventional commits format (fix:, feat:, etc.). Resolution: update the title to use a conventional commits prefix, e.g. "fix: ZFS raw device bay mapping".
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (1 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ast-grep (0.42.1)
assets/45d/45drives-disks/assets/index.bbceb330.js

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
scripts/45d-generate-map (1)

101-108: Log the reason when the ZFS config link is skipped or fails.

Right now only the success path is visible in drivemap.log. If /etc/zfs/vdev_id.conf already exists as a regular file, or symlink() fails, native ZFS will still miss the aliases and there is nothing in the log explaining why.

Also applies to: 134-136

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/45d-generate-map` around lines 101 - 108, The code silently skips or
fails to create the ZFS alias symlink; update the branch that checks
file_exists($target) && !is_link($target) to write a clear message to
drivemap.log explaining that $target exists as a regular file and the link was
skipped, and when calling `@symlink`($alias_file, $target) remove the silence
operator and on failure log an error with the returned false plus
error_get_last() (or equivalent) so the exact cause is recorded; also log when
an existing is_link($target) is unlinked (via `@unlink`) and when symlink succeeds
so all outcomes (skipped, unlinked, failed, succeeded) are visible.
php/zfs_info.php (1)

96-97: Add one raw-NVMe fixture for this new normalization branch.

The new fixtures cover sda1/sdb1, but not nvme0n1p1. A tiny NVMe case would lock in the behavior added here and keep the raw-device mapping change from becoming SATA-only by accident.

Based on learnings: Preserve legacy parity and update tests and explain intentional behavior changes in PR notes/commit messages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@php/zfs_info.php` around lines 96 - 97, Add a unit/fixture that exercises the
new NVMe normalization branch: include an input like "nvme0n1p1" with the
expected normalized output "nvme0n1" so the regex branch
(/^(nvme[0-9]+n[0-9]+)p[0-9]+$/ used on $name) is covered; update the
test/fixtures file where sda1/sdb1 samples live to include this nvme case and
run the test to ensure the code path that returns $match[1] is exercised, and
add a short note in the PR/commit message explaining this intentional behavior
change and that legacy parity is preserved.
tests/run.php (1)

554-554: Make the warnings assertion strict to protect API contract parity.

At Line 554, empty(...) also passes when warnings is missing. Consider asserting the key exists and equals [] so schema regressions are caught.

Suggested test tightening
-assert_true(empty($zfs_raw_info['warnings']), 'raw-device zfs_info suppresses alias warning when drivemap can resolve devices');
+assert_true(array_key_exists('warnings', $zfs_raw_info), 'raw-device zfs_info includes warnings field');
+assert_equal($zfs_raw_info['warnings'], [], 'raw-device zfs_info suppresses alias warning when drivemap can resolve devices');

Based on learnings: Preserve legacy parity and update tests and explain intentional behavior changes in PR notes/commit messages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/run.php` at line 554, The test currently uses
assert_true(empty($zfs_raw_info['warnings'])) which passes if the 'warnings' key
is missing; update the assertion to strictly enforce the API contract by
asserting that the 'warnings' key exists on $zfs_raw_info and equals an empty
array (e.g., use an equality or array-equals assertion against [] for
$zfs_raw_info['warnings']). Locate the assertion referencing $zfs_raw_info and
'warnings' and replace the loose empty() check with a strict equality/assertion
to [] so schema regressions are detected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@php/zfs_info.php`:
- Around line 41-57: The function zfs_drivemap_lookup currently memoizes the
first result in static $lookup and never refreshes it, causing stale
drivemap.json data; change the cache to include the file mtime (or remove the
static memo entirely) and refresh when the file changes: obtain $map_path via
the existing logic, get its modification time, and only return the cached data
if the stored mtime matches the current file mtime; otherwise reload via
zfs_load_json($map_path) and update the cache (or simply drop the static cache
and always call zfs_load_json) so subsequent calls see regenerated drivemap.json
and updated rows/warnings.

---

Nitpick comments:
In `@php/zfs_info.php`:
- Around line 96-97: Add a unit/fixture that exercises the new NVMe
normalization branch: include an input like "nvme0n1p1" with the expected
normalized output "nvme0n1" so the regex branch (/^(nvme[0-9]+n[0-9]+)p[0-9]+$/
used on $name) is covered; update the test/fixtures file where sda1/sdb1 samples
live to include this nvme case and run the test to ensure the code path that
returns $match[1] is exercised, and add a short note in the PR/commit message
explaining this intentional behavior change and that legacy parity is preserved.

In `@scripts/45d-generate-map`:
- Around line 101-108: The code silently skips or fails to create the ZFS alias
symlink; update the branch that checks file_exists($target) && !is_link($target)
to write a clear message to drivemap.log explaining that $target exists as a
regular file and the link was skipped, and when calling `@symlink`($alias_file,
$target) remove the silence operator and on failure log an error with the
returned false plus error_get_last() (or equivalent) so the exact cause is
recorded; also log when an existing is_link($target) is unlinked (via `@unlink`)
and when symlink succeeds so all outcomes (skipped, unlinked, failed, succeeded)
are visible.

In `@tests/run.php`:
- Line 554: The test currently uses
assert_true(empty($zfs_raw_info['warnings'])) which passes if the 'warnings' key
is missing; update the assertion to strictly enforce the API contract by
asserting that the 'warnings' key exists on $zfs_raw_info and equals an empty
array (e.g., use an equality or array-equals assertion against [] for
$zfs_raw_info['warnings']). Locate the assertion referencing $zfs_raw_info and
'warnings' and replace the loose empty() check with a strict equality/assertion
to [] so schema regressions are detected.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7d2ebbc7-becf-4828-b6be-5ba56ecb5668

📥 Commits

Reviewing files that changed from the base of the PR and between 27ae6f9 and fabd516.

📒 Files selected for processing (10)
  • README.md
  • php/zfs_info.php
  • scripts/45d-generate-map
  • tests/fixtures/zfs_raw/zfs_list.txt
  • tests/fixtures/zfs_raw/zpool_iostat_path_tank.txt
  • tests/fixtures/zfs_raw/zpool_iostat_tank.txt
  • tests/fixtures/zfs_raw/zpool_list.txt
  • tests/fixtures/zfs_raw/zpool_status_path_tank.txt
  • tests/fixtures/zfs_raw/zpool_status_tank.txt
  • tests/run.php

Comment thread php/zfs_info.php
Purpose of the change:
- Update the inferred HL15 display model to the current Unraid and 45Homelab product name.

How behavior was before:
- Hosts detected through the MW34-SP0-00 plus 16-port HBA fallback profile emitted Model as HomeLab-HL15.

Why that was a problem:
- The UI displayed an outdated product label for the X-15 hardware.

What the new change accomplishes:
- Emits Unraid >< 45Homelab X-15 for that fallback hardware profile.
- Keeps the underlying chassis and alias style as HL15 and HOMELAB so mapping behavior is unchanged.

How it works:
- Updates the hardware profile model string in 45d-generate-server-info.
- Updates local and remote smoke test expectations for the fallback profile.
Purpose of the change:
- Ensure ZFS bay resolution sees regenerated drivemap.json data within the same PHP process.

How behavior was before:
- zfs_drivemap_lookup cached the first lookup result in a static variable and never checked the source file again.

Why that was a problem:
- If drivemap.json was regenerated after the first lookup, later ZFS parsing could keep using stale device-to-bay mappings and produce outdated disk rows or warnings.

What the new change accomplishes:
- Removes the static memoization so every lookup reads the current drivemap.json contents.

How it works:
- zfs_drivemap_lookup now builds its lookup from the resolved map path on each call.
- The test suite rewrites a map file in one PHP process and asserts the second lookup returns the updated bay.
- Purpose: rename the ZFS-specific animation control and make it available for non-ZFS storage activity.
- Before: the switch only rendered when ZFS was installed, and the animation helper only highlighted ZFS vdev and pool relationships.
- Problem: that made the UI look like a ZFS-only feature and hid useful activity animation behavior on systems without usable ZFS metadata.
- New behavior: the viewer shows a generic Show Animations toggle and falls back to storage groups derived from drivemap rows.
- How it works: App.vue builds animation_disks and animation_groups from bay IDs plus storage role, label, and filesystem type; the canvas helper preserves ZFS-specific highlighting when available and otherwise animates the selected disk plus related storage group members.
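The grouping step can be sketched as below: derive a shared group key per occupied row, then collect bay IDs per group so the canvas can animate related disks together. The function name and the group-key callback are illustrative; the actual derivation lives in App.vue's updateAnimationInfo.

```javascript
// Sketch: build animation groups (groupKey -> [bayIds]) from drivemap rows.
// Rows without a bay ID or a shared group key stay ungrouped.
function animationGroups(rows, groupKeyFn) {
  const groups = {};
  for (const row of rows) {
    const bayId = row["bay-id"];
    const key = groupKeyFn(row);
    if (!bayId || !key) continue; // standalone disks animate alone
    (groups[key] = groups[key] || []).push(bayId);
  }
  return groups;
}
```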

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js (1)

126-137: ⚠️ Potential issue | 🟠 Major

Guard nested vdev access before dereference to avoid runtime breakage.

showAnimations() can throw when a disk maps to a pool but vdevs/vdev_idx is missing or out of range. In that case, the function never reaches the intended fallback call and animations stop for that frame.

Suggested defensive fix
-      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
+      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
         //current disk is member of a zpool
         let pool_idx = zfsInfo.zpools.findIndex(
           (pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name
         );
-        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+          const pool = zfsInfo.zpools[pool_idx];
+          const vdevIdx = zfsInfo.zfs_disks[cd].vdev_idx;
+          const currentVdev = pool?.vdevs?.[vdevIdx];
+          if (!Array.isArray(pool?.vdevs) || !Array.isArray(currentVdev?.disks)) {
+            return p5.showStorageActivity(cd, zfsInfo, diskLocations, y_offset);
+          }
           //we have the index of the zpool to which the current disk belongs
           p5.updateZfsAnimationState();
           //animate each disk within the same vdev as the current
-          zfsInfo.zpools[pool_idx].vdevs[
-            zfsInfo.zfs_disks[cd].vdev_idx
-          ].disks.forEach((dsk) => {
+          currentVdev.disks.forEach((dsk) => {
             //look at each disk in the same vdev as the current disk
             let dsk_idx = diskLocations.findIndex(
               (loc) => loc.BAY === dsk.name
             );
@@
-          zfsInfo.zpools[pool_idx].vdevs.forEach((vdev) => {
+          pool.vdevs.forEach((vdev) => {
+            if (!Array.isArray(vdev?.disks)) return;
             //look at each vdev in same zpool as the current disk
             vdev.disks.forEach((dsk) => {

Also applies to: 173-176, 217-217

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js`
around lines 126 - 137, The code in showAnimations() dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx] without checking
that vdevs exists and that vdev_idx is in-range, which can throw and skip the
fallback animation call; update the logic that currently calls
p5.updateZfsAnimationState() then immediately iterates over .vdevs[...].disks to
first verify zfsInfo.zpools[pool_idx].vdevs is an array and that
zfsInfo.zfs_disks[cd].vdev_idx is a valid index (>=0 and < vdevs.length), and
only then access .disks.forEach(...); if the guard fails, call the intended
fallback (p5.updateZfsAnimationState()) and skip the disk iteration; apply the
same defensive checks at the other occurrences that access .vdevs and .vdev_idx
(the blocks flagged at lines 173-176 and 217).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 36997-37013: animationGroupKey is producing coarse fallback keys
(fs:... or role:...) that group unrelated standalone disks; remove those
fallbacks so only true shared identities (array, parity, pool:<label>,
boot:<label>) produce a non-empty key. Update the animationGroupKey function to
return "" for slots that lack a shared pool/boot/array/parity identity (i.e.,
drop the fs-type and generic role fallbacks like `fs:${fsType}` and
`role:${role}`), and ensure any grouping logic in showStorageActivity relies
only on the non-empty keys returned by animationGroupKey (or an explicit
shared-group property) so standalone disks are not animated together.
- Around line 23281-23285: The ZFS branch in p5.showAnimations can still
dereference vdevs[vdev_idx].disks when the pool/vdev tree is partially hydrated;
update the guard so before using vdevs[vdev_idx].disks you verify pool_idx >= 0,
zfsInfo.zpools[pool_idx] exists, zfsInfo.zpools[pool_idx].vdevs is an array,
vdev_idx (from finding the vdev) is >= 0 and vdevs[vdev_idx] exists and has a
disks array that contains zfsInfo.zfs_disks[cd]; if any of those checks fail,
take the generic storage animation fallback path instead of proceeding into the
ZFS-specific logic.

---

Outside diff comments:
In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js`:
- Around line 126-137: The code in showAnimations() dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx] without checking
that vdevs exists and that vdev_idx is in-range, which can throw and skip the
fallback animation call; update the logic that currently calls
p5.updateZfsAnimationState() then immediately iterates over .vdevs[...].disks to
first verify zfsInfo.zpools[pool_idx].vdevs is an array and that
zfsInfo.zfs_disks[cd].vdev_idx is a valid index (>=0 and < vdevs.length), and
only then access .disks.forEach(...); if the guard fails, call the intended
fallback (p5.updateZfsAnimationState()) and skip the disk iteration; apply the
same defensive checks at the other occurrences that access .vdevs and .vdev_idx
(the blocks flagged at lines 173-176 and 217).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 997e2386-34b3-443b-a1a2-00cab92abe4e

📥 Commits

Reviewing files that changed from the base of the PR and between a9321c9 and 88c3320.

📒 Files selected for processing (4)
  • assets/45d/45drives-disks/assets/index.bbceb330.js
  • vendor/45drives/cockpit-hardware/45drives-disks/src/App.vue
  • vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
  • vendor/45drives/cockpit-hardware/45drives-disks/src/components/zfsAnimation.js

Comment on lines +23281 to +23285
p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
if (zfsInfo.zfs_installed) {
if (zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
if (typeof pool_idx != "undefined") {
if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {

⚠️ Potential issue | 🟠 Major

Guard partial ZFS topology before taking the ZFS path.

The new fallback only runs when this branch returns false, but this guard still allows the later vdevs[vdev_idx].disks dereference to throw if the disk record exists before the pool/vdev tree is fully hydrated. In that case the generic storage animation never gets a chance to run.

Suggested fix
   p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
     if (zfsInfo.zfs_installed) {
-      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-        let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+      const zfsDisk = zfsInfo.zfs_disks?.[cd];
+      if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+        const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+        const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+        if (vdev?.disks?.length) {
           p5.updateZfsAnimationState();
-          zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+          vdev.disks.forEach((dsk) => {
             // existing rendering logic
           });
           return true;
         }
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23281 -
23285, The ZFS branch in p5.showAnimations can still dereference
vdevs[vdev_idx].disks when the pool/vdev tree is partially hydrated; update the
guard so before using vdevs[vdev_idx].disks you verify pool_idx >= 0,
zfsInfo.zpools[pool_idx] exists, zfsInfo.zpools[pool_idx].vdevs is an array,
vdev_idx (from finding the vdev) is >= 0 and vdevs[vdev_idx] exists and has a
disks array that contains zfsInfo.zfs_disks[cd]; if any of those checks fail,
take the generic storage animation fallback path instead of proceeding into the
ZFS-specific logic.

Comment on lines +36997 to +37013
const animationGroupKey = (slot) => {
if (!slot || !slot.occupied)
return "";
const role = slot["storage-role"] || "";
const label = slot["storage-label"] || "";
const fsType = slot["fs-type"] || "";
if (role === "array" || role === "parity")
return "unraid-array";
if (role === "pool" && label)
return `pool:${label}`;
if (role === "boot" && label)
return `boot:${label}`;
if (fsType)
return `fs:${fsType}`;
if (role)
return `role:${role}`;
return "";

⚠️ Potential issue | 🟠 Major

Don't group unrelated disks by coarse fallback keys.

When a slot has no shared pool/boot identity, this code falls back to keys like fs:xfs or role:disk. showStorageActivity() then animates every member of that group, so unrelated standalone disks can light up together just because they share a filesystem type or generic role.

Suggested fix
     const animationGroupKey = (slot) => {
       if (!slot || !slot.occupied)
         return "";
       const role = slot["storage-role"] || "";
       const label = slot["storage-label"] || "";
-      const fsType = slot["fs-type"] || "";
       if (role === "array" || role === "parity")
         return "unraid-array";
       if (role === "pool" && label)
         return `pool:${label}`;
       if (role === "boot" && label)
         return `boot:${label}`;
-      if (fsType)
-        return `fs:${fsType}`;
-      if (role)
-        return `role:${role}`;
       return "";
     };
@@
         const bayId = slot?.["bay-id"];
-        const groupKey = animationGroupKey(slot);
+        const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
         if (!bayId || !groupKey)
           return;

Also applies to: 37018-37035

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 36997 -
37013, animationGroupKey is producing coarse fallback keys (fs:... or role:...)
that group unrelated standalone disks; remove those fallbacks so only true
shared identities (array, parity, pool:<label>, boot:<label>) produce a
non-empty key. Update the animationGroupKey function to return "" for slots that
lack a shared pool/boot/array/parity identity (i.e., drop the fs-type and
generic role fallbacks like `fs:${fsType}` and `role:${role}`), and ensure any
grouping logic in showStorageActivity relies only on the non-empty keys returned
by animationGroupKey (or an explicit shared-group property) so standalone disks
are not animated together.

- Purpose: map the Unraid >< 45Homelab X-15 model string to the existing HL15 canvas sketch.
- Before: the server info fallback emitted the correct model label, but the frontend parser only recognized older HomeLab-HL15 style identifiers.
- Problem: the disk viewer rendered the unsupported-model fallback even though the machine should use the HL15 layout.
- New behavior: model strings matching 45Homelab X-15 now resolve to HomeLabHL15.
- How it works: CanvasSection checks the X-15 marketing name before the legacy regex parser; the checked-in served bundle mirrors the source change.
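The resolution order described above can be sketched like this. The function name and the legacy-parser fallback shown here are simplifications for illustration; the special-case check for the X-15 marketing name is the part this PR adds to CanvasSection.vue.

```javascript
// Sketch: check the new marketing name first, then fall back to the
// legacy identifier parser (simplified here to stripping separators).
function resolveCanvasModel(model) {
  if (/45Homelab X-15/.test(model)) return "HomeLabHL15";
  // Legacy identifiers like "HomeLab-HL15" reduce to "HomeLabHL15".
  return model.replace(/[^A-Za-z0-9]/g, "");
}
```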

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
assets/45d/45drives-disks/assets/index.bbceb330.js (2)

23281-23287: ⚠️ Potential issue | 🟠 Major

Guard the ZFS path before dereferencing vdevs[vdev_idx].disks.

zfsInfo.zfs_disks[cd] being present does not guarantee the pool's vdevs tree is fully hydrated. This can still throw before the generic showStorageActivity() fallback runs, so the new non-ZFS animation path never gets a chance to recover.

Suggested fix
   p5.showAnimations = (cd, zfsInfo, diskLocations2, y_offset = 0) => {
     if (zfsInfo.zfs_installed) {
-      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-        let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+      const zfsDisk = zfsInfo.zfs_disks?.[cd];
+      if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+        const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+        const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+        if (Array.isArray(vdev?.disks) && vdev.disks.length > 0) {
           p5.updateZfsAnimationState();
-          zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+          vdev.disks.forEach((dsk) => {
             let dsk_idx = diskLocations2.findIndex((loc) => loc.BAY === dsk.name);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23281 -
23287, The code in p5.showAnimations dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
checking that vdevs, the vdev at vdev_idx, or its disks array exist; add a guard
after locating pool_idx (e.g., check pool && pool.vdevs && pool.vdevs[vdev_idx]
&& Array.isArray(pool.vdevs[vdev_idx].disks)) and only call
p5.updateZfsAnimationState() and iterate disks if that guard passes; otherwise
call the non‑ZFS fallback p5.showStorageActivity(...) so the code doesn’t throw
when the vdev tree is not hydrated.
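The guard the prompt asks for can be factored into a lookup helper. This is an illustrative sketch, not the bundle's code: the `zfsInfo` shape (`zpools`, `zfs_disks`, `vdev_idx`) is taken from the diffs quoted in this review, and `resolveVdevDisks` is a hypothetical name. Returning `null` on any missing link lets the caller branch to the non-ZFS fallback instead of throwing.

```javascript
// Walk zpools -> vdevs[vdev_idx] -> disks with optional chaining;
// return the disks array only when the whole topology is hydrated.
const resolveVdevDisks = (zfsInfo, cd) => {
  const zfsDisk = zfsInfo?.zfs_disks?.[cd];
  if (!Array.isArray(zfsInfo?.zpools) || !zfsDisk) return null;
  const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
  const disks = pool?.vdevs?.[zfsDisk.vdev_idx]?.disks;
  return Array.isArray(disks) && disks.length > 0 ? disks : null;
};

const hydrated = {
  zpools: [{ name: "tank", vdevs: [{ disks: [{ name: "1-1" }] }] }],
  zfs_disks: { sda: { zpool_name: "tank", vdev_idx: 0 } },
};
const partial = {
  zpools: [{ name: "tank" }], // vdevs never populated
  zfs_disks: { sda: { zpool_name: "tank", vdev_idx: 0 } },
};

console.log(resolveVdevDisks(hydrated, "sda")); // [ { name: '1-1' } ]
console.log(resolveVdevDisks(partial, "sda"));  // null (caller falls back)
```

The key point is that the partial-hydration case degrades to `null` rather than a TypeError, so `showStorageActivity()` still runs.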

37000-37035: ⚠️ Potential issue | 🟠 Major

Don't group unrelated disks by filesystem type or generic role.

fs:${fsType} and role:${role} are still shared keys, so showStorageActivity() will animate unrelated standalone disks together just because they both happen to be xfs or disk.

Suggested fix
     const animationGroupKey = (slot) => {
       if (!slot || !slot.occupied)
         return "";
       const role = slot["storage-role"] || "";
       const label = slot["storage-label"] || "";
-      const fsType = slot["fs-type"] || "";
       if (role === "array" || role === "parity")
         return "unraid-array";
       if (role === "pool" && label)
         return `pool:${label}`;
       if (role === "boot" && label)
         return `boot:${label}`;
-      if (fsType)
-        return `fs:${fsType}`;
-      if (role)
-        return `role:${role}`;
       return "";
     };
@@
-        const groupKey = animationGroupKey(slot);
+        const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 37000 -
37035, animationGroupKey currently returns shared keys like `fs:${fsType}` and
`role:${role}`, which causes unrelated disks to be grouped; change
animationGroupKey (and adjust updateAnimationInfo usage) so that it does not
emit generic shared group keys: either return an empty string for generic
filesystem types and generic roles (e.g., when role === 'disk' or fsType is
non-unique) or produce a per-disk unique key such as `bay:${bayId}`; update
updateAnimationInfo to pass bayId into the decision (or use the bayId when
groupKey would otherwise be generic) so showStorageActivity() no longer animates
unrelated disks together.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 36582-36586: The fast-path check for X-15 is fine but the fallback
regex assigned to testString is too narrow compared to the pattern used in
CanvasSection.vue, causing models like Studio, HL15_BEAST, STUDIO8, F16 and VM2
to miss; update the regex used in the testString assignment (the
/(Storinator|Stornado|HomeLab|Professional|Proxinator)-.../m pattern) to match
the exact alternatives and group ordering from CanvasSection.vue (so it includes
Studio/STUDIO8, HL15_BEAST, F16, VM2, etc.), keep the same capture groups so
enableString = testString ? testString[1] + testString[4] + (testString[3] ?
testString[3] : "") : "" continues to build the identifier correctly, and ensure
the updated regex is applied where modelString is tested.

---

Duplicate comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 23281-23287: The code in p5.showAnimations dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
checking that vdevs, the vdev at vdev_idx, or its disks array exist; add a guard
after locating pool_idx (e.g., check pool && pool.vdevs && pool.vdevs[vdev_idx]
&& Array.isArray(pool.vdevs[vdev_idx].disks)) and only call
p5.updateZfsAnimationState() and iterate disks if that guard passes; otherwise
call the non‑ZFS fallback p5.showStorageActivity(...) so the code doesn’t throw
when the vdev tree is not hydrated.
- Around line 37000-37035: animationGroupKey currently returns shared keys like
`fs:${fsType}` and `role:${role}`, which causes unrelated disks to be grouped;
change animationGroupKey (and adjust updateAnimationInfo usage) so that it does
not emit generic shared group keys: either return an empty string for generic
filesystem types and generic roles (e.g., when role === 'disk' or fsType is
non-unique) or produce a per-disk unique key such as `bay:${bayId}`; update
updateAnimationInfo to pass bayId into the decision (or use the bayId when
groupKey would otherwise be generic) so showStorageActivity() no longer animates
unrelated disks together.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3024ff3b-2649-4edd-a7d3-883d520bfd46

📥 Commits

Reviewing files that changed from the base of the PR and between 88c3320 and 073675e.

📒 Files selected for processing (2)
  • assets/45d/45drives-disks/assets/index.bbceb330.js
  • vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue
🚧 Files skipped from review as they are similar to previous changes (1)
  • vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue

- Purpose: move model-to-layout selection onto a stable backend contract instead of a frontend marketing-name alias.
- Before: the frontend parsed the display Model string directly and had a hardcoded X-15 regex fallback.
- Problem: display names can change for branding, which made the canvas layout selection brittle and mixed presentation text with machine routing.
- New behavior: server_info.json includes Canvas Model while preserving the pretty Model value for display.
- How it works: generated and copied server info now receive a canonical Canvas Model derived from alias style and chassis; CanvasSection reads Canvas Model first and falls back to Model for older data.
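The selection order described here can be sketched as a small helper. This is an assumption-laden illustration: the `Canvas Model` and `Model` field names come from this PR's description, while `canvasModelFor` and the `stripDashes` stand-in parser are hypothetical (the real CanvasSection parser uses a regex over the display name).

```javascript
// Prefer the stable "Canvas Model" field; fall back to parsing the
// display "Model" only for older server_info.json payloads.
const canvasModelFor = (serverInfo, parseDisplayModel) => {
  const canvas = serverInfo?.["Canvas Model"];
  if (typeof canvas === "string" && canvas.trim() !== "") return canvas.trim();
  return parseDisplayModel(serverInfo?.Model || "");
};

// Stand-in for the legacy display-name parser (illustrative only).
const stripDashes = (model) => model.replace(/-/g, "");

console.log(canvasModelFor(
  { "Canvas Model": "HomeLabHL15", Model: "Unraid >< 45Homelab X-15" },
  stripDashes,
)); // HomeLabHL15 (stable field wins; marketing name never parsed)
console.log(canvasModelFor({ Model: "HomeLab-HL15" }, stripDashes)); // HomeLabHL15 (legacy fallback)
```

Because the display `Model` is only consulted when `Canvas Model` is absent, rebranding the marketing name can no longer break layout selection.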

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
assets/45d/45drives-disks/assets/index.bbceb330.js (2)

23283-23288: ⚠️ Potential issue | 🟠 Major

Guard partial ZFS topology before dereferencing vdevs[vdev_idx].disks.

Line 23287 can still throw when vdevs or the selected vdev_idx is missing during partial hydration, which prevents falling back to showStorageActivity().

Suggested fix
-      if (Array.isArray(zfsInfo.zpools) && zfsInfo.zfs_disks && zfsInfo.zfs_disks[cd]) {
-        let pool_idx = zfsInfo.zpools.findIndex((pool) => pool.name === zfsInfo.zfs_disks[cd].zpool_name);
-        if (pool_idx >= 0 && zfsInfo.zpools[pool_idx]) {
+      const zfsDisk = zfsInfo.zfs_disks?.[cd];
+      if (Array.isArray(zfsInfo.zpools) && zfsDisk) {
+        const pool = zfsInfo.zpools.find((candidate) => candidate.name === zfsDisk.zpool_name);
+        const vdev = pool?.vdevs?.[zfsDisk.vdev_idx];
+        if (vdev?.disks?.length) {
           p5.updateZfsAnimationState();
-          zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks.forEach((dsk) => {
+          vdev.disks.forEach((dsk) => {
             let dsk_idx = diskLocations2.findIndex((loc) => loc.BAY === dsk.name);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 23283 -
23288, The code dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
guarding for missing vdevs or a missing vdev index; update the block that uses
pool_idx and vdev_idx to first verify zfsInfo.zpools[pool_idx].vdevs exists and
that zfsInfo.zfs_disks[cd].vdev_idx is a valid index (and that
vdevs[vdev_idx].disks exists) before iterating, and if any of those checks fail
call showStorageActivity() (and avoid calling p5.updateZfsAnimationState() on an
incomplete topology) so partial hydration cannot throw.

36998-37014: ⚠️ Potential issue | 🟠 Major

Avoid coarse fallback grouping (fs:*, role:*) for unrelated disks.

Line 37010-Line 37013 still groups by filesystem/role, so unrelated standalone disks can animate together. This was previously flagged and is still present.

Suggested fix
     const animationGroupKey = (slot) => {
       if (!slot || !slot.occupied)
         return "";
       const role = slot["storage-role"] || "";
       const label = slot["storage-label"] || "";
-      const fsType = slot["fs-type"] || "";
       if (role === "array" || role === "parity")
         return "unraid-array";
       if (role === "pool" && label)
         return `pool:${label}`;
       if (role === "boot" && label)
         return `boot:${label}`;
-      if (fsType)
-        return `fs:${fsType}`;
-      if (role)
-        return `role:${role}`;
       return "";
     };
@@
-        const groupKey = animationGroupKey(slot);
+        const groupKey = animationGroupKey(slot) || `disk:${bayId}`;
         if (!bayId || !groupKey)
           return;

Also applies to: 37021-37023

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@assets/45d/45drives-disks/assets/index.bbceb330.js` around lines 36998 -
37014, animationGroupKey currently falls back to coarse grouping strings
(`fs:${fsType}` and `role:${role}`) which causes unrelated standalone disks to
animate together; update the animationGroupKey function to stop returning `fs:*`
or `role:*` as fallbacks and instead return a more specific identifier (e.g., a
unique slot identifier like slot.id/slot.serial) or an empty string so
standalone disks are not grouped, and apply the same change to the other
identical grouping occurrence elsewhere in the file.
🧹 Nitpick comments (2)
vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue (1)

157-158: Empty watcher appears to be dead code.

Line 158 contains watch(enableZfsAnimations,()=>{}); which does nothing. If this is intended to force Vue reactivity tracking, consider adding a comment explaining the purpose. Otherwise, remove it.

♻️ Suggested fix
     const canvasCardBody = ref();
     const activeSketchStr = enableSketch(canvasModel.value);
-    watch(enableZfsAnimations,()=>{});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue`
around lines 157 - 158, The empty watcher watch(enableZfsAnimations, ()=>{}) is
dead code; either remove it or make its intent explicit for future readers.
Locate the watcher call using the enableZfsAnimations symbol in
CanvasSection.vue and either delete the watch(...) line if it serves no purpose,
or replace it with a no-op body that includes a short comment (e.g., "// ensure
reactivity for enableZfsAnimations" ) so the reason for leaving an otherwise
empty watcher is clear to maintainers.
tests/run.php (1)

556-573: Test assumes zfs_drivemap_lookup() does not cache results.

This lookup refresh test relies on zfs_drivemap_lookup() re-reading the JSON file on each call. Based on the context snippet, the function reads from the file without caching, so the test works today. However, if caching is ever added to zfs_drivemap_lookup(), this test will start failing for a reason that is not obvious from the assertion alone.

Consider adding a comment documenting this assumption:

📝 Suggested comment
 require_once $root . '/php/zfs_info.php';
+// Note: This test relies on zfs_drivemap_lookup() reading from disk on each call (no caching).
 $lookup_refresh_path = $ctx['out_dir'] . '/zfs_lookup_refresh.json';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/run.php` around lines 556 - 573, Add a short comment above this refresh
test explaining that it assumes zfs_drivemap_lookup() reads the
DRIVEMAP_OUTPUT_FILE on each call (no in-process caching), so the test writes a
new JSON and expects the second call to reflect the change; mention that if
caching is added to zfs_drivemap_lookup() the test will become invalid and
should either clear/reset the cache or use a cache-bypassing mechanism.
Reference the symbols zfs_drivemap_lookup(), DRIVEMAP_OUTPUT_FILE
(lookup_refresh_path) in the comment so future maintainers know what to update
if caching is introduced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@assets/45d/45drives-disks/assets/index.bbceb330.js`:
- Around line 23283-23288: The code dereferences
zfsInfo.zpools[pool_idx].vdevs[zfsInfo.zfs_disks[cd].vdev_idx].disks without
guarding for missing vdevs or a missing vdev index; update the block that uses
pool_idx and vdev_idx to first verify zfsInfo.zpools[pool_idx].vdevs exists and
that zfsInfo.zfs_disks[cd].vdev_idx is a valid index (and that
vdevs[vdev_idx].disks exists) before iterating, and if any of those checks fail
call showStorageActivity() (and avoid calling p5.updateZfsAnimationState() on an
incomplete topology) so partial hydration cannot throw.
- Around line 36998-37014: animationGroupKey currently falls back to coarse
grouping strings (`fs:${fsType}` and `role:${role}`) which causes unrelated
standalone disks to animate together; update the animationGroupKey function to
stop returning `fs:*` or `role:*` as fallbacks and instead return a more
specific identifier (e.g., a unique slot identifier like slot.id/slot.serial) or
an empty string so standalone disks are not grouped, and apply the same change
to the other identical grouping occurrence elsewhere in the file.

---

Nitpick comments:
In `@tests/run.php`:
- Around line 556-573: Add a short comment above this refresh test explaining
that it assumes zfs_drivemap_lookup() reads the DRIVEMAP_OUTPUT_FILE on each
call (no in-process caching), so the test writes a new JSON and expects the
second call to reflect the change; mention that if caching is added to
zfs_drivemap_lookup() the test will become invalid and should either clear/reset
the cache or use a cache-bypassing mechanism. Reference the symbols
zfs_drivemap_lookup(), DRIVEMAP_OUTPUT_FILE (lookup_refresh_path) in the comment
so future maintainers know what to update if caching is introduced.

In
`@vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue`:
- Around line 157-158: The empty watcher watch(enableZfsAnimations, ()=>{}) is
dead code; either remove it or make its intent explicit for future readers.
Locate the watcher call using the enableZfsAnimations symbol in
CanvasSection.vue and either delete the watch(...) line if it serves no purpose,
or replace it with a no-op body that includes a short comment (e.g., "// ensure
reactivity for enableZfsAnimations" ) so the reason for leaving an otherwise
empty watcher is clear to maintainers.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d49e3749-6a09-4243-8a1b-61d099f3543a

📥 Commits

Reviewing files that changed from the base of the PR and between 073675e and 7908450.

📒 Files selected for processing (5)
  • assets/45d/45drives-disks/assets/index.bbceb330.js
  • scripts/45d-generate-server-info
  • tests/remote_smoke.php
  • tests/run.php
  • vendor/45drives/cockpit-hardware/45drives-disks/src/components/CanvasSection.vue

