OCPBUGS-84335: vSphere - Missing unit tests to cover polarion workitems#10516
Conversation
> Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
Walkthrough: Adds extensive unit tests across vSphere-related packages: machines/machinesets generation, scheduler manifest generation, CNS volume deletion, and vSphere platform/machinepool/install-config validation edge cases.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 11 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🧹 Nitpick comments (2)
pkg/asset/manifests/scheduler_test.go (1)
18-92: Add one explicit nil-replica compute pool test case.
`Scheduler.Generate` only sums replicas when `pool.Replicas != nil` (pkg/asset/manifests/scheduler.go, lines 59-63). Adding a case with a compute pool where `Replicas` is `nil` would lock that branch explicitly. Suggested test case addition:

```diff
 {
 	name: "multiple compute pools with some replicas",
 	installConfig: icBuild.build(
 		func(ic *types.InstallConfig) {
 			ic.ControlPlane = &types.MachinePool{
 				Replicas: ptr.To(int64(3)),
 			}
 			ic.Compute = []types.MachinePool{
 				{Replicas: ptr.To(int64(0))},
 				{Replicas: ptr.To(int64(2))},
 			}
 		},
 	),
 	mastersSchedulable: false,
 },
+{
+	name: "compute pool with nil replicas",
+	installConfig: icBuild.build(
+		func(ic *types.InstallConfig) {
+			ic.ControlPlane = &types.MachinePool{
+				Replicas: ptr.To(int64(3)),
+			}
+			ic.Compute = []types.MachinePool{
+				{}, // Replicas=nil
+			}
+		},
+	),
+	mastersSchedulable: true,
+},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/manifests/scheduler_test.go` around lines 18 - 92, Add a test case in the cases slice in scheduler_test.go that covers a compute pool with a nil Replicas pointer to exercise the branch in Scheduler.Generate that only sums when pool.Replicas != nil; specifically add a case named like "compute pool with nil replicas" using icBuild.build (same pattern as other cases) where ControlPlane.Replicas = 3 and Compute = []types.MachinePool{{}} (no Replicas field set) and assert mastersSchedulable is true so the nil-Replicas path is explicitly tested.

pkg/types/vsphere/validation/machinepool_test.go (1)
341-374: Prefer deriving boundary values from validation constants.
Hardcoded limits (30/29, 81/80) and the manual long-name literal are brittle. Using `maxVSphereDataDisks`/`maxVSphereDataDiskNameLength` keeps tests aligned with validator changes. Proposed refactor:

```diff
 import (
 	"fmt"
+	"strings"
 	"testing"
@@
 {
 	name:     "data disk count exceeds maximum",
 	platform: validPlatform(),
 	pool: func() *types.MachinePool {
-		disks := make([]vsphere.DataDisk, 30)
+		disks := make([]vsphere.DataDisk, maxVSphereDataDisks+1)
 		for i := range disks {
 			disks[i] = vsphere.DataDisk{
 				Name:    fmt.Sprintf("disk%d", i),
 				SizeGiB: 10,
 			}
@@
-	expectedErrMsg: `test-path.dataDisks: Invalid value: 30: data disk count must not exceed 29`,
+	expectedErrMsg: fmt.Sprintf(`test-path.dataDisks: Invalid value: %d: data disk count must not exceed %d`, maxVSphereDataDisks+1, maxVSphereDataDisks),
 },
 {
 	name: "data disk name exceeds maximum length",
@@
 	DataDisks: []vsphere.DataDisk{
 		{
-			Name:    "a12345678901234567890123456789012345678901234567890123456789012345678901234567890",
+			Name:    strings.Repeat("a", maxVSphereDataDiskNameLength+1),
 			SizeGiB: 10,
 		},
 	},
@@
-	expectedErrMsg: `test-path.disks\[0].name: Invalid value: 81: data disk name must not exceed 80`,
+	expectedErrMsg: fmt.Sprintf(`test-path.disks\[0].name: Invalid value: %d: data disk name must not exceed %d`, maxVSphereDataDiskNameLength+1, maxVSphereDataDiskNameLength),
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/types/vsphere/validation/machinepool_test.go` around lines 341 - 374, Replace hardcoded boundary numbers and long-literal disk name in the MachinePool test with the validator constants: construct the disks slice length using maxVSphereDataDisks (e.g., maxVSphereDataDisks to trigger exceed and maxVSphereDataDisks-1 for allowed) and build the overlong name by repeating a character to maxVSphereDataDiskNameLength+1; update expectedErrMsg to derive the numeric values from those constants (use the same constants to compute the message numbers) so tests reference maxVSphereDataDisks and maxVSphereDataDiskNameLength instead of literal 30/29/81/80; locate and change the test cases that create types.MachinePool with vsphere.DataDisk and the expectedErrMsg strings to use these constants.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.idea/vcs.xml:
- Around line 1-6: The PR accidentally includes IDE metadata (.idea/vcs.xml);
remove that file from the commit and exclude all IDE artifacts by adding
".idea/" to the repository ignore list, then re-commit without the .idea
directory so future commits don't include it; locate the offending file by name
(.idea/vcs.xml) and ensure the .gitignore contains an entry for ".idea/" before
committing the change.
In `@pkg/asset/manifests/scheduler_test.go`:
- Around line 117-119: The assertion message is misleading: the format string
says "compute replicas total is %s" but passes tc.name; update the assert.Equal
call near assert.Equal(t, tc.mastersSchedulable, config.Spec.MastersSchedulable,
...) to use a message that matches the passed argument (for example
"mastersSchedulable should be %v for test case %s") or alternatively pass the
actual replicas-total field instead of tc.name so the message and parameters
align (referencing assert.Equal, tc.mastersSchedulable,
config.Spec.MastersSchedulable, and tc.name).
---
Nitpick comments:
In `@pkg/asset/manifests/scheduler_test.go`:
- Around line 18-92: Add a test case in the cases slice in scheduler_test.go
that covers a compute pool with a nil Replicas pointer to exercise the branch in
Scheduler.Generate that only sums when pool.Replicas != nil; specifically add a
case named like "compute pool with nil replicas" using icBuild.build (same
pattern as other cases) where ControlPlane.Replicas = 3 and Compute =
[]types.MachinePool{{}} (no Replicas field set) and assert mastersSchedulable is
true so the nil-Replicas path is explicitly tested.
In `@pkg/types/vsphere/validation/machinepool_test.go`:
- Around line 341-374: Replace hardcoded boundary numbers and long-literal disk
name in the MachinePool test with the validator constants: construct the disks
slice length using maxVSphereDataDisks (e.g., maxVSphereDataDisks to trigger
exceed and maxVSphereDataDisks-1 for allowed) and build the overlong name by
repeating a character to maxVSphereDataDiskNameLength+1; update expectedErrMsg
to derive the numeric values from those constants (use the same constants to
compute the message numbers) so tests reference maxVSphereDataDisks and
maxVSphereDataDiskNameLength instead of literal 30/29/81/80; locate and change
the test cases that create types.MachinePool with vsphere.DataDisk and the
expectedErrMsg strings to use these constants.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 4239f93f-ebe3-479e-8884-d3e4afc7a40f
📒 Files selected for processing (12)
- .idea/.gitignore
- .idea/golinter.xml
- .idea/installer.iml
- .idea/modules.xml
- .idea/vcs.xml
- pkg/asset/machines/vsphere/machines_test.go
- pkg/asset/machines/vsphere/machinesets_test.go
- pkg/asset/manifests/scheduler_test.go
- pkg/destroy/vsphere/vsphere_test.go
- pkg/types/validation/installconfig_test.go
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
Force-pushed f76716a to 3d1de11
🧹 Nitpick comments (2)
pkg/asset/manifests/scheduler_test.go (1)
23-92: Add a `Replicas: nil` compute-pool case to lock in current scheduler behavior.
The matrix is strong, but it doesn’t explicitly cover the branch where compute pools exist and `Replicas` is unset (nil). `Scheduler.Generate` currently treats that as zero and sets masters schedulable to true, so it’s worth pinning with a dedicated case. Proposed test-case addition:

```diff
 {
 	name: "multiple compute pools with some replicas",
 	installConfig: icBuild.build(
 		func(ic *types.InstallConfig) {
 			ic.ControlPlane = &types.MachinePool{
 				Replicas: ptr.To(int64(3)),
 			}
 			ic.Compute = []types.MachinePool{
 				{Replicas: ptr.To(int64(0))},
 				{Replicas: ptr.To(int64(2))},
 			}
 		},
 	),
 	mastersSchedulable: false,
 },
+{
+	name: "compute pool with nil replicas",
+	installConfig: icBuild.build(
+		func(ic *types.InstallConfig) {
+			ic.ControlPlane = &types.MachinePool{
+				Replicas: ptr.To(int64(3)),
+			}
+			ic.Compute = []types.MachinePool{
+				{Replicas: nil},
+			}
+		},
+	),
+	mastersSchedulable: true,
+},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/manifests/scheduler_test.go` around lines 23 - 92, Add a test case that covers the branch where a compute pool exists but its MachinePool.Replicas field is nil to lock current Scheduler.Generate behavior; update the table-driven tests in pkg/asset/manifests/scheduler_test.go alongside the other cases (same icBuild.build pattern) to create an installConfig with ic.ControlPlane.Replicas set and ic.Compute containing a MachinePool with Replicas left nil, and assert mastersSchedulable is true (matching current Scheduler.Generate handling of nil as zero).

pkg/asset/machines/vsphere/machinesets_test.go (1)
120-135: Avoid shared mutable fixture for the no-zones case.
`MachineSets` mutates `pool.Platform.VSphere.Zones` when it is empty, so using package-level `machineComputePoolNoZones` by pointer can make tests order-dependent over time. Prefer a fresh per-test fixture instance. ♻️ Suggested isolation pattern:

```diff
-var machineComputePoolNoZones = types.MachinePool{
-	...
-}
+func newMachineComputePoolNoZones() types.MachinePool {
+	return types.MachinePool{
+		Name:     "worker",
+		Replicas: &machinePoolReplicas,
+		Platform: types.MachinePoolPlatform{
+			VSphere: &vsphere.MachinePool{
+				NumCPUs:           4,
+				NumCoresPerSocket: 2,
+				MemoryMiB:         16384,
+				OSDisk: vsphere.OSDisk{
+					DiskSizeGB: 60,
+				},
+			},
+		},
+		Hyperthreading: "true",
+		Architecture:   types.ArchitectureAMD64,
+	}
+}
```

```diff
-{
-	testCase: "no zones specified uses all failure domains",
-	machinePool: &machineComputePoolNoZones,
+{
+	testCase: "no zones specified uses all failure domains",
+	machinePool: func() *types.MachinePool {
+		p := newMachineComputePoolNoZones()
+		return &p
+	}(),
 	...
 },
```

Also applies to: 217-254
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/machines/vsphere/machinesets_test.go` around lines 120 - 135, The test uses a package-level mutable fixture machineComputePoolNoZones which MachineSets mutates (it writes to pool.Platform.VSphere.Zones), causing order-dependent failures; change the tests to construct a fresh MachinePool instance per test (e.g., copy or recreate machineComputePoolNoZones inside the test before calling MachineSets) so each test gets its own pool.Platform.VSphere (and Zones) rather than sharing the package-level variable; ensure all occurrences (including the other case around lines referenced) are updated to use per-test instances.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@pkg/asset/machines/vsphere/machinesets_test.go`:
- Around line 120-135: The test uses a package-level mutable fixture
machineComputePoolNoZones which MachineSets mutates (it writes to
pool.Platform.VSphere.Zones), causing order-dependent failures; change the tests
to construct a fresh MachinePool instance per test (e.g., copy or recreate
machineComputePoolNoZones inside the test before calling MachineSets) so each
test gets its own pool.Platform.VSphere (and Zones) rather than sharing the
package-level variable; ensure all occurrences (including the other case around
lines referenced) are updated to use per-test instances.
In `@pkg/asset/manifests/scheduler_test.go`:
- Around line 23-92: Add a test case that covers the branch where a compute pool
exists but its MachinePool.Replicas field is nil to lock current
Scheduler.Generate behavior; update the table-driven tests in
pkg/asset/manifests/scheduler_test.go alongside the other cases (same
icBuild.build pattern) to create an installConfig with ic.ControlPlane.Replicas
set and ic.Compute containing a MachinePool with Replicas left nil, and assert
mastersSchedulable is true (matching current Scheduler.Generate handling of nil
as zero).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 4575e9f3-a83d-4d33-9d65-76bc3c49e360
📒 Files selected for processing (8)
- .gitignore
- pkg/asset/machines/vsphere/machines_test.go
- pkg/asset/machines/vsphere/machinesets_test.go
- pkg/asset/manifests/scheduler_test.go
- pkg/destroy/vsphere/vsphere_test.go
- pkg/types/validation/installconfig_test.go
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/destroy/vsphere/vsphere_test.go
Did you want to commit this file?
🤷‍♂️ I guess we wouldn't have to worry about JetBrains IDE project files. I can remove it too.
I am fine with this 1 entry 😁 Though, ideally, IDE-specific file ignore should be defined in a global place like $HOME/.gitignore (your own settings); thus we can keep this file minimal.
Force-pushed 3d1de11 to 338ca9e
```go
	"github.com/openshift/installer/pkg/types"
)

func TestSchedulerGenerate(t *testing.T) {
```
Maybe add:

```go
{
	name: "compute pool with nil replicas",
	installConfig: icBuild.build(
		func(ic *types.InstallConfig) {
			ic.ControlPlane = &types.MachinePool{
				Replicas: ptr.To(int64(3)),
			}
			ic.Compute = []types.MachinePool{
				{}, // Pool exists but Replicas is nil
			}
		},
	),
```
Claude's response:
⏺ That's already been addressed. We added this exact test case in the latest commit — "compute pool with nil replicas treated as zero" in scheduler_test.go. It constructs a compute pool with {Replicas: nil} and
asserts mastersSchedulable: true, which matches the current Scheduler.Generate behavior (nil replicas are skipped at line 62, keeping the total at 0).
is that what you were thinking?
I was wondering if there was a difference between {} and {Replicas: nil}?
> I was wondering if there was a difference between {} and {Replicas: nil}?
Oh not at all. They are exactly equivalent as zero value of a pointer is a nil :D
Actionable comments posted: 2
🧹 Nitpick comments (3)
pkg/asset/machines/vsphere/machinesets_test.go (1)
467-473: Guard map lookups for clearer failure diagnostics. Both tests rely on implicit zero-values when a datacenter key is missing. Adding explicit key checks makes failures immediate and easier to debug.
🔎 Suggested hardening
```diff
-zoneName := dcToZone[zone]
+zoneName, ok := dcToZone[zone]
+if !ok {
+	t.Fatalf("unexpected datacenter %q for machineset %s", zone, ms.Name)
+}
 expectedServer := expectedServers[zoneName]
-expectedTemplate := expectedTemplates[dc]
+expectedTemplate, ok := expectedTemplates[dc]
+if !ok {
+	t.Fatalf("unexpected datacenter %q for machineset %s", dc, ms.Name)
+}
```

Also applies to: 548-550
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/machines/vsphere/machinesets_test.go` around lines 467 - 473, The test does an unchecked map lookup (dcToZone[zone]) which can return a zero value and obscure failures; update the test to guard both lookups by checking the presence boolean: first confirm dcToZone has the key for variable zone (using the two-value form) and fail the test with a clear message if missing, then use the found zoneName to check that expectedServers contains that zoneName (again using the two-value form) and fail with a clear diagnostic if not found; apply the same guarded lookup pattern to the other instance referenced (around the expectedServer usage at lines ~548-550) and use descriptive error messages mentioning dcToZone, zoneName, and expectedServers to make failures immediate and debuggable.

pkg/types/vsphere/validation/platform_test.go (1)
631-661: Anchor single-error regex expectations to reduce false positives.
These patterns are currently unanchored; `assert.Regexp` can still pass if additional unexpected errors are included. 🎯 Suggested tightening:

```diff
-expectedError: `test-path\.failureDomains\.regionType: Required value: region type cannot be used for host group failure domains`,
+expectedError: `^test-path\.failureDomains\.regionType: Required value: region type cannot be used for host group failure domains$`,
@@
-expectedError: `test-path\.failureDomains\.hostGroup: Required value: must not be empty if zoneType is HostGroup`,
+expectedError: `^test-path\.failureDomains\.hostGroup: Required value: must not be empty if zoneType is HostGroup$`,
@@
-expectedError: `test-path\.failureDomains\.regionType: Required value: zoneType must be HostGroup`,
+expectedError: `^test-path\.failureDomains\.regionType: Required value: zoneType must be HostGroup$`,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/types/vsphere/validation/platform_test.go` around lines 631 - 661, The test expectations for failure domain errors are using unanchored regexes which can match unintended additional messages; update the expectedError values used in the test cases (the table entries in platform_test.go) to anchor the regexes (e.g., prepend ^ and append $) so they match the exact message the assert.Regexp in the test asserts against; ensure every case that currently sets expectedError (used by the test harness calling assert.Regexp) is tightened to an anchored pattern for precise matching.

pkg/types/vsphere/validation/machinepool_test.go (1)
359-374: Prefer deriving the over-limit name from the max-length constant.
The long literal is brittle and hard to verify visually. Building it from `maxVSphereDataDiskNameLength` keeps the test self-documenting. ♻️ Optional refactor:

```diff
 import (
 	"fmt"
+	"strings"
 	"testing"
@@
 {
 	name:     "data disk name exceeds maximum length",
 	platform: validPlatform(),
 	pool: &types.MachinePool{
 		Platform: types.MachinePoolPlatform{
 			VSphere: &vsphere.MachinePool{
 				DataDisks: []vsphere.DataDisk{
 					{
-						Name:    "a12345678901234567890123456789012345678901234567890123456789012345678901234567890",
+						Name:    strings.Repeat("a", maxVSphereDataDiskNameLength+1),
 						SizeGiB: 10,
 					},
 				},
 			},
 		},
 	},
-	expectedErrMsg: `test-path.disks\[0].name: Invalid value: 81: data disk name must not exceed 80`,
+	expectedErrMsg: fmt.Sprintf(`test-path.disks\[0].name: Invalid value: %d: data disk name must not exceed %d`, maxVSphereDataDiskNameLength+1, maxVSphereDataDiskNameLength),
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/types/vsphere/validation/machinepool_test.go` around lines 359 - 374, Replace the brittle hard-coded over-length disk name and literal "80" in the test with values derived from the constant maxVSphereDataDiskNameLength: set the DataDisk Name using strings.Repeat("a", maxVSphereDataDiskNameLength+1) and build expectedErrMsg using strconv.Itoa(maxVSphereDataDiskNameLength) (or fmt.Sprintf) so the test asserts the error message contains the exact max length from the constant; update the test case that constructs types.MachinePool -> Platform -> VSphere -> DataDisks and its expectedErrMsg accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/asset/machines/vsphere/machinesets_test.go`:
- Around line 397-421: The test currently iterates over provider.Network.Devices
and can false-pass when Devices is empty; update the machinesets_test loop to
explicitly assert that provider.Network.Devices is non-empty for each machineSet
(e.g. check len(provider.Network.Devices) > 0 and call t.Errorf/t.Fatalf with a
clear message referencing machineSet.Name) before iterating, so missing NIC
generation fails the test and the subsequent per-device assertions run only when
devices exist.
In `@pkg/destroy/vsphere/vsphere_test.go`:
- Around line 670-705: The test uses global AnyTimes() expectations on
vsphereClient (GetCnsVolumes/DeleteCnsVolumes) which can mask missing
DeleteCnsVolumes calls; fix by scoping mock expectations inside each subtest and
replace AnyTimes() for DeleteCnsVolumes (and the per-case GetCnsVolumes where
relevant) with explicit Times(...) or Times(1) so each subtest asserts the
expected delete behavior; specifically update expectations for
DeleteCnsVolumes(gomock.Any(), gomock.Eq(validVolume)) and
DeleteCnsVolumes(gomock.Any(), gomock.Eq(failVolume)) to be set inside the
corresponding subtests with Times(1) (or Times(0) for negative cases), and move
GetCnsVolumes(...) returns into those subtests as well instead of using
AnyTimes() at the top level.
---
Nitpick comments:
In `@pkg/asset/machines/vsphere/machinesets_test.go`:
- Around line 467-473: The test does an unchecked map lookup (dcToZone[zone])
which can return a zero value and obscure failures; update the test to guard
both lookups by checking the presence boolean: first confirm dcToZone has the
key for variable zone (using the two-value form) and fail the test with a clear
message if missing, then use the found zoneName to check that expectedServers
contains that zoneName (again using the two-value form) and fail with a clear
diagnostic if not found; apply the same guarded lookup pattern to the other
instance referenced (around the expectedServer usage at lines ~548-550) and use
descriptive error messages mentioning dcToZone, zoneName, and expectedServers to
make failures immediate and debuggable.
In `@pkg/types/vsphere/validation/machinepool_test.go`:
- Around line 359-374: Replace the brittle hard-coded over-length disk name and
literal "80" in the test with values derived from the constant
maxVSphereDataDiskNameLength: set the DataDisk Name using strings.Repeat("a",
maxVSphereDataDiskNameLength+1) and build expectedErrMsg using
strconv.Itoa(maxVSphereDataDiskNameLength) (or fmt.Sprintf) so the test asserts
the error message contains the exact max length from the constant; update the
test case that constructs types.MachinePool -> Platform -> VSphere -> DataDisks
and its expectedErrMsg accordingly.
In `@pkg/types/vsphere/validation/platform_test.go`:
- Around line 631-661: The test expectations for failure domain errors are using
unanchored regexes which can match unintended additional messages; update the
expectedError values used in the test cases (the table entries in
platform_test.go) to anchor the regexes (e.g., prepend ^ and append $) so they
match the exact message the assert.Regexp in the test asserts against; ensure
every case that currently sets expectedError (used by the test harness calling
assert.Regexp) is tightened to an anchored pattern for precise matching.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: a05acc3c-04a7-45b5-b830-4a1748df2989
📒 Files selected for processing (8)
- .gitignore
- pkg/asset/machines/vsphere/machines_test.go
- pkg/asset/machines/vsphere/machinesets_test.go
- pkg/asset/manifests/scheduler_test.go
- pkg/destroy/vsphere/vsphere_test.go
- pkg/types/validation/installconfig_test.go
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/asset/machines/vsphere/machines_test.go
Force-pushed 338ca9e to 636bd55
/test golint
Actionable comments posted: 1
🧹 Nitpick comments (3)
pkg/types/validation/installconfig_test.go (1)
978-1001: Harden these regex assertions to avoid overly permissive test matches.
At lines 988 and 1000, the patterns can match unintended strings (unescaped `.` and one unanchored pattern). Prefer literal matching with `^\Q...\E$` to keep failures precise. Proposed test assertion tightening:

```diff
 {
 	name: "vsphere cluster name with dot",
 	installConfig: func() *types.InstallConfig {
 		c := validInstallConfig()
 		c.Platform = types.Platform{
 			VSphere: validVSpherePlatform(),
 		}
 		c.ObjectMeta.Name = "jima.test"
 		return c
 	}(),
-	expectedError: `^metadata\.name: Invalid value: "jima.test": cluster name must not contain '\.'$`,
+	expectedError: `^\Qmetadata.name: Invalid value: "jima.test": cluster name must not contain '.'\E$`,
 },
 {
 	name: "vsphere vCenter with special characters",
 	installConfig: func() *types.InstallConfig {
 		c := validInstallConfig()
 		c.Platform = types.Platform{
 			VSphere: validVSpherePlatform(),
 		}
 		c.Platform.VSphere.VCenters[0].Server = "44&236-21-251.vmwarevmc.com"
 		return c
 	}(),
-	expectedError: `platform\.vsphere\.vcenters\[0]\.server: Invalid value: "44&236-21-251.vmwarevmc.com": must be the domain name or IP address of the vCenter`,
+	expectedError: `^\Qplatform.vsphere.vcenters[0].server: Invalid value: "44&236-21-251.vmwarevmc.com": must be the domain name or IP address of the vCenter\E$`,
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/types/validation/installconfig_test.go` around lines 978 - 1001, The two test cases "vsphere cluster name with dot" and "vsphere vCenter with special characters" use overly permissive regexes for expectedError; update their expectedError strings to be anchored and to match literals (either escape the dots and special chars or use \Q...\E) so they become exact matches (e.g., wrap with ^...$ and escape the '.' in metadata.name and escape the vCenter server string or use \Q...\E around the full error text) ensuring the assertions only match the precise error message emitted by the validation code.

pkg/asset/machines/vsphere/machines_test.go (1)
663-665: Use an independent expected zone in these assertions.
These lookups key off `data.MachineFailureDomain`, which is also produced by `Machines()`. A failure-domain assignment bug can move both the provider spec and the “expected” value together, so the test still passes. Prefer deriving the expected zone from deterministic machine order/name instead.
Also applies to: 853-855, 913-915, 1058-1061
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/machines/vsphere/machines_test.go` around lines 663 - 665, The assertions currently derive expected values from data.MachineFailureDomain (used by Machines()), making the test blind to bugs that change failure-domain assignments; instead compute an independent expected zone from the machine identity (e.g. deterministic ordering or machine.Name) and use that to look up tc.expectedPerZone so the expected value doesn't come from data.MachineFailureDomain. Update the test cases around the blocks referencing assignedZone := data.MachineFailureDomain[machine.Name] (and the other occurrences) to compute assignedZone deterministically from machine.Name or its index (not from data.MachineFailureDomain) before pulling expectedNets := tc.expectedPerZone[assignedZone], then assert against actual results produced by Machines().pkg/asset/machines/vsphere/machinesets_test.go (1)
468-479: Avoid deriving the expected value from `Workspace.Datacenter`.
Here the expected server/VMGroup/template is selected from another field generated by the same `MachineSets()` call. If the wrong failure domain is chosen, these checks can still pass because `Datacenter` and the asserted field drift together. Prefer asserting explicit expected tuples per machineset instead.
Also applies to: 516-519, 552-555
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/machines/vsphere/machinesets_test.go` around lines 468 - 479, The test is deriving expected values from providerSpec.Workspace.Datacenter (via dcToZone and expectedServers), which can hide errors if both fields drift; instead, hardcode explicit expected tuples per machineset and assert them directly. Update the tests around MachineSets() assertions (references: providerSpec.Workspace.Datacenter, dcToZone, expectedServers, providerSpec.Workspace.Server and similar checks at the other locations) to map each machineset index/name to a fixed expected datacenter/server/vmGroup/template tuple and assert providerSpec.Workspace.* fields against those hardcoded expected values rather than computing expected values from other generated fields.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/asset/machines/vsphere/machines_test.go`:
- Around line 587-595: The test currently iterates data.Machines without
asserting the machine count, which can false-pass if Machines() returns zero;
add an assertion before the loop to ensure data.Machines is not empty (e.g.
assert.NotEmpty/ assert.Greater on data.Machines) so the test fails if
Machines() returns zero; if the table case has an explicit expected machines
count, assert that len(data.Machines) equals that expected count instead. Ensure
you reference data.Machines (and Machines() if needed) and place the assertion
immediately before the for _, machine := range data.Machines { loop.
---
Nitpick comments:
In `@pkg/asset/machines/vsphere/machines_test.go`:
- Around line 663-665: The assertions currently derive expected values from
data.MachineFailureDomain (used by Machines()), making the test blind to bugs
that change failure-domain assignments; instead compute an independent expected
zone from the machine identity (e.g. deterministic ordering or machine.Name) and
use that to look up tc.expectedPerZone so the expected value doesn't come from
data.MachineFailureDomain. Update the test cases around the blocks referencing
assignedZone := data.MachineFailureDomain[machine.Name] (and the other
occurrences) to compute assignedZone deterministically from machine.Name or its
index (not from data.MachineFailureDomain) before pulling expectedNets :=
tc.expectedPerZone[assignedZone], then assert against actual results produced by
Machines().
In `@pkg/asset/machines/vsphere/machinesets_test.go`:
- Around line 468-479: The test is deriving expected values from
providerSpec.Workspace.Datacenter (via dcToZone and expectedServers), which can
hide errors if both fields drift; instead, hardcode explicit expected tuples per
machineset and assert them directly. Update the tests around MachineSets()
assertions (references: providerSpec.Workspace.Datacenter, dcToZone,
expectedServers, providerSpec.Workspace.Server and similar checks at the other
locations) to map each machineset index/name to a fixed expected
datacenter/server/vmGroup/template tuple and assert providerSpec.Workspace.*
fields against those hardcoded expected values rather than computing expected
values from other generated fields.
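A minimal illustration of the hardcoded-tuple approach; the machineset names and datacenter/server values below are hypothetical, not the test's real fixtures:

```go
package main

import "fmt"

// workspaceExpectation pins the exact datacenter/server pair a machineset
// must carry, instead of deriving one field from another generated field.
type workspaceExpectation struct {
	Datacenter string
	Server     string
}

func main() {
	// Hypothetical machineset names mapped to fixed expected tuples.
	want := map[string]workspaceExpectation{
		"worker-a": {Datacenter: "dc0", Server: "vcenter1.example.com"},
		"worker-b": {Datacenter: "dc1", Server: "vcenter2.example.com"},
	}
	// Stands in for the providerSpec.Workspace the generator produced.
	got := workspaceExpectation{Datacenter: "dc0", Server: "vcenter1.example.com"}
	if got != want["worker-a"] {
		panic(fmt.Sprintf("worker-a workspace mismatch: %+v", got))
	}
	fmt.Println("ok")
}
```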
In `@pkg/types/validation/installconfig_test.go`:
- Around line 978-1001: The two test cases "vsphere cluster name with dot" and
"vsphere vCenter with special characters" use overly permissive regexes for
expectedError; update their expectedError strings to be anchored and to match
literals (either escape the dots and special chars or use \Q...\E) so they
become exact matches (e.g., wrap with ^...$ and escape the '.' in metadata.name
and escape the vCenter server string or use \Q...\E around the full error text)
ensuring the assertions only match the precise error message emitted by the
validation code.
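One way to get anchored, literal matches is `regexp.QuoteMeta`; the error text below is a hypothetical example, not the validator's exact output:

```go
package main

import (
	"fmt"
	"regexp"
)

// exactMatch builds an anchored pattern that matches msg and nothing else,
// escaping metacharacters such as the '.' in a cluster name.
func exactMatch(msg string) *regexp.Regexp {
	return regexp.MustCompile("^" + regexp.QuoteMeta(msg) + "$")
}

func main() {
	msg := `metadata.name: Invalid value: "my.cluster": cluster name must not contain '.'`
	re := exactMatch(msg)
	fmt.Println(re.MatchString(msg))                 // the exact message matches
	fmt.Println(re.MatchString(msg + " trailing")) // longer strings no longer slip through
}
```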
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 36ca1516-6e18-4ee7-b5ba-eb8b9a5b40fd
📒 Files selected for processing (8)
- .gitignore
- pkg/asset/machines/vsphere/machines_test.go
- pkg/asset/machines/vsphere/machinesets_test.go
- pkg/asset/manifests/scheduler_test.go
- pkg/destroy/vsphere/vsphere_test.go
- pkg/types/validation/installconfig_test.go
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
✅ Files skipped from review due to trivial changes (2)
- .gitignore
- pkg/asset/manifests/scheduler_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
636bd55 to c7ab917
Actionable comments posted: 1
🧹 Nitpick comments (2)
pkg/destroy/vsphere/vsphere_test.go (1)
634-725: Optional: extract the repeated subtest setup into a small helper for readability. The repeated mockCtrl/vsphereClient/metadata/uninstaller setup is consistent but verbose; a helper would make the test easier to extend.

Refactor sketch:

```diff
+newTestUninstaller := func(t *testing.T, infra string, setup func(c *mock.MockAPI)) *ClusterUninstaller {
+	mockCtrl := gomock.NewController(t)
+	vsphereClient := mock.NewMockAPI(mockCtrl)
+	setup(vsphereClient)
+	metadata := newDefaultMetadata()
+	metadata.InfraID = infra
+	u := newWithClient(nullLogger, &metadata, []API{vsphereClient})
+	assert.NotNil(t, u)
+	return u
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/destroy/vsphere/vsphere_test.go` around lines 634-725: tests repeat identical gomock and uninstaller setup across subtests; extract it into a helper to improve readability. Create a helper (e.g., setupCnsTest or newCnsTestHelper) that constructs gomock.NewController(t) and mock.NewMockAPI(ctrl), calls newDefaultMetadata(), optionally sets metadata.InfraID, calls newWithClient(nullLogger, &metadata, []API{vsphereClient}), and returns the controller, vsphereClient, metadata, and uninstaller; ensure callers defer ctrl.Finish() and use the returned symbols in each t.Run instead of repeating the setup; update tests referencing deleteCnsVolumes to use the helper.

pkg/asset/manifests/scheduler_test.go (1)
108-109: Make the subtest case capture explicit for clarity. While Go 1.25.8 handles range-variable semantics correctly, explicitly shadowing the loop variable with `tc := tc` before `t.Run` improves code clarity and is a recommended pattern for subtests.

Suggested change:

```diff
 for _, tc := range cases {
+	tc := tc
 	t.Run(tc.name, func(t *testing.T) {
 		parents := asset.Parents{}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/asset/manifests/scheduler_test.go` around lines 108 - 109, In the table-driven test loop over cases, explicitly shadow the loop variable before launching the subtest to make the capture clear: add a local assignment like tc := tc immediately inside the for loop and before t.Run so the closure passed to t.Run references the shadowed tc; this change should be applied around the existing for _, tc := range cases { ... t.Run(...) } block to improve clarity for the tc variable used in the subtest closure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/asset/machines/vsphere/machinesets_test.go`:
- Around line 471-477: The datacenter->zone and zone->expectedServer lookups
(dcToZone[zone], expectedServers[zoneName]) assume keys exist; add comma-ok
checks and fail the test immediately if a key is missing (e.g., check zoneName,
ok := dcToZone[zone] and expectedServer, ok := expectedServers[zoneName]) so the
test fails fast with a clear error message; apply the same pattern to the other
lookup sites referenced (the occurrences around the other test blocks using
dcToZone and expectedServers).
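The comma-ok pattern the prompt asks for, sketched with a hypothetical mapping:

```go
package main

import "fmt"

// zoneForDatacenter fails loudly when a datacenter key is missing, instead of
// silently returning the map's zero value and letting the test mis-assert.
func zoneForDatacenter(dcToZone map[string]string, dc string) (string, error) {
	zone, ok := dcToZone[dc]
	if !ok {
		return "", fmt.Errorf("no zone mapping for datacenter %q", dc)
	}
	return zone, nil
}

func main() {
	dcToZone := map[string]string{"dc0": "zone-a"} // hypothetical mapping
	if zone, err := zoneForDatacenter(dcToZone, "dc0"); err == nil {
		fmt.Println(zone)
	}
	if _, err := zoneForDatacenter(dcToZone, "dc-missing"); err != nil {
		fmt.Println("lookup failed fast:", err)
	}
}
```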
---
Nitpick comments:
In `@pkg/asset/manifests/scheduler_test.go`:
- Around line 108-109: In the table-driven test loop over cases, explicitly
shadow the loop variable before launching the subtest to make the capture clear:
add a local assignment like tc := tc immediately inside the for loop and before
t.Run so the closure passed to t.Run references the shadowed tc; this change
should be applied around the existing for _, tc := range cases { ... t.Run(...)
} block to improve clarity for the tc variable used in the subtest closure.
In `@pkg/destroy/vsphere/vsphere_test.go`:
- Around line 634-725: Summary: Tests repeat identical gomock and uninstaller
setup across subtests; extract it to a helper to improve readability. Create a
helper (e.g., setupCnsTest or newCnsTestHelper) that constructs
gomock.NewController(t), mock.NewMockAPI(ctrl), calls newDefaultMetadata(),
optionally sets metadata.InfraID, calls newWithClient(nullLogger, &metadata,
[]API{vsphereClient}) and returns the controller, vsphereClient, metadata, and
uninstaller; ensure callers defer ctrl.Finish() and use the returned symbols
(mockCtrl, vsphereClient, metadata, uninstaller) in each t.Run instead of
repeating the setup; update tests referencing deleteCnsVolumes to use the
helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 52c73bab-8ad1-41f1-a7d6-23d4c2db1a1d
📒 Files selected for processing (8)
- .gitignore
- pkg/asset/machines/vsphere/machines_test.go
- pkg/asset/machines/vsphere/machinesets_test.go
- pkg/asset/manifests/scheduler_test.go
- pkg/destroy/vsphere/vsphere_test.go
- pkg/types/validation/installconfig_test.go
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
✅ Files skipped from review due to trivial changes (2)
- .gitignore
- pkg/asset/machines/vsphere/machines_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/types/vsphere/validation/machinepool_test.go
- pkg/types/vsphere/validation/platform_test.go
c7ab917 to 4b41191
/test unit
@jcpowermac: This pull request references Jira Issue OCPBUGS-84335, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
The bug has been updated to refer to the pull request using the external bug tracker.
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/verified by @jcpowermac via unit test job passing
@jcpowermac: This PR has been marked as verified by
Details
In response to this:
@jcpowermac: The following tests failed, say
Full PR test history. Your PR dashboard.
Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/cherry-pick release-4.22
@jcpowermac: once the present PR merges, I will cherry-pick it on top of
Details
In response to this:
left a comment
/approve
Great! I just have a few suggestions 👍
I am fine with this 1 entry 😁 Though, ideally, IDE-specific ignore entries should be defined in a global place like $HOME/.gitignore (your own settings), so we can keep this file minimal.
```go
	"github.com/openshift/installer/pkg/types"
)

func TestSchedulerGenerate(t *testing.T) {
```
I was wondering if there was a difference between {} and {Replicas: nil}?
Oh not at all. They are exactly equivalent as zero value of a pointer is a nil :D
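This can be demonstrated with a self-contained sketch (the `MachinePool` type here is a simplified stand-in for the installer's `types.MachinePool`):

```go
package main

import "fmt"

// MachinePool is a simplified stand-in for the installer type; only the
// Replicas pointer field matters for this demonstration.
type MachinePool struct {
	Replicas *int64
}

// emptyVsExplicitNil shows that the zero value of a pointer field is nil,
// so MachinePool{} and MachinePool{Replicas: nil} are indistinguishable.
func emptyVsExplicitNil() (bool, bool) {
	a := MachinePool{}
	b := MachinePool{Replicas: nil}
	return a.Replicas == nil, b.Replicas == nil
}

func main() {
	fmt.Println(emptyVsExplicitNil()) // both fields are nil
}
```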
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: tthvo
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
Add comprehensive unit tests across vSphere platform validation, machine generation, and cluster destroy:

- Machine generation: multi-NIC, multi-zone, hardware config propagation, pre-existing RHCOS template, multi-vCenter workspace server, host group and VMGroup propagation, failure domain auto-population
- MachineSet generation: single zone, multizone ControlPlaneMachineSet, static IP AddressesFromPools, multi-zone template propagation
- Platform validation: invalid IPI options, topology field validation, datastore cluster paths, customized folder paths, data disk count and name length, datacenter and computeCluster validation
- Machine pool validation: data disk count and name length
- Scheduler: mastersSchedulable for compact 3-node clusters
- Cluster destroy: CNS volume deletion
- DNS validation: external DNS records with load balancer

Fix gofmt struct field alignment in machines_test.go and scheduler_test.go. Fix misleading assertion message in scheduler_test.go where the format string referenced "compute replicas total" but received the test case name. Fix shared mutable package-level fixture machineComputePoolNoZones whose Zones field was mutated by MachineSets(), replacing it with a per-test inline construction. Add test case for nil Replicas in compute pool to lock current behavior. Add guard assertion for empty Network.Devices in static IP test. Scope CNS volume test mock expectations per-subtest with explicit Times() calls to prevent false-pass on missing delete calls.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
4b41191 to f2553af
/verified by @jcpowermac via unit test job passing
@jcpowermac: This PR has been marked as verified by
Details
In response to this:
Merged 73b340a into openshift:main
@jcpowermac: Jira Issue Verification Checks: Jira Issue OCPBUGS-84335
Jira Issue OCPBUGS-84335 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓
Details
In response to this:
@jcpowermac: new pull request created: #10526
Details
In response to this:
Fix included in release 5.0.0-0.nightly-2026-04-30-010124
Add comprehensive unit tests across vSphere platform validation, machine generation, and cluster destroy:

- Machine generation: multi-NIC, multi-zone, hardware config propagation, pre-existing RHCOS template, multi-vCenter workspace server, host group and VMGroup propagation, failure domain auto-population
- MachineSet generation: single zone, multizone ControlPlaneMachineSet, static IP AddressesFromPools, multi-zone template propagation
- Platform validation: invalid IPI options, topology field validation, datastore cluster paths, customized folder paths, data disk count and name length, datacenter and computeCluster validation

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Summary by CodeRabbit
Tests
Chores