Allow writing 'null' for HardwareRAIDVolumes and SoftwareRAIDVolumes. #966

Merged: 1 commit into metal3-io:master on Sep 8, 2021

Conversation

@levsha (Contributor) commented Sep 5, 2021

BuildRAIDCleanSteps and saveHostProvisioningSettings rely on distinguishing between empty/missing and [].

Not allowing null causes errors like this:

Reconciler error,reconciler group:metal3.io,reconciler kind:BareMetalHost,name:sch05,namespace:cluster-sch,error:failed to save host status after "preparing": BareMetalHost.metal3.io "sch05" is invalid: status.provisioning.raid.hardwareRAIDVolumes: Invalid value: "null": status.provisioning.raid.hardwareRAIDVolumes in body must be of type array: "null",errorVerbose:BareMetalHost.metal3.io "sch05" is invalid: status.provisioning.raid.hardwareRAIDVolumes: Invalid value: "null": status.provisioning.raid.hardwareRAIDVolumes in body must be of type array: "null"
 failed to save host status after "preparing"
 github.com/metal3-io/baremetal-operator/controllers/metal3%2eio.(*BareMetalHostReconciler).Reconcile
 	/workspace/controllers/metal3.io/baremetalhost_controller.go:259
 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:298
 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:253
 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:214
 runtime.goexit
 	/usr/local/go/src/runtime/asm_amd64.s:1371,stacktrace:sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:253
 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:214
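The distinction matters because Go's encoding/json serializes a nil slice as JSON null and an empty slice as []. A minimal, self-contained sketch (using a simplified stand-in struct, not the real RAIDConfig/HardwareRAIDVolume types) shows why the CRD schema has to accept null for these fields:

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the RAID settings; the real type lives in the
// baremetal-operator API package and carries more fields.
type raidConfig struct {
	HardwareRAIDVolumes []string `json:"hardwareRAIDVolumes"`
}

func main() {
	unset, _ := json.Marshal(raidConfig{HardwareRAIDVolumes: nil})
	cleared, _ := json.Marshal(raidConfig{HardwareRAIDVolumes: []string{}})

	// A nil slice marshals to null, an empty slice to []. Without the +nullable
	// marker the CRD schema only accepts an array, so persisting the "unset"
	// case produces the validation error quoted above.
	fmt.Println(string(unset))   // {"hardwareRAIDVolumes":null}
	fmt.Println(string(cleared)) // {"hardwareRAIDVolumes":[]}
}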

@metal3-io-bot (Contributor):

Hi @levsha. Thanks for your PR.

I'm waiting for a metal3-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@metal3-io-bot added the needs-ok-to-test and size/S labels on Sep 5, 2021
@s3rj1k (Member) commented Sep 6, 2021

/ok-to-test

@metal3-io-bot added the ok-to-test label and removed the needs-ok-to-test label on Sep 6, 2021
@dtantsur (Member) commented Sep 6, 2021

/test-integration
/lgtm

@metal3-io-bot added the lgtm label on Sep 6, 2021
@@ -278,6 +278,7 @@ type RAIDConfig struct {
// The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume.
// You can set the value of this field to `[]` to clear all the hardware RAID configurations.
// +optional
// +nullable
Review comment (Member):

Should we include something in the docstring which describes that you can set the value to null, and what that means?
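For illustration only (not part of this PR), a docstring along the lines the reviewer suggests might look like the sketch below. The field name, JSON name, and markers come from the surrounding RAIDConfig definition and the error above; the wording, the element type, and the struct tag are assumptions about the intended semantics (nil/null means keep the current configuration, [] means clear it).

// HardwareRAIDVolumes is the list of logical disks for hardware RAID; if
// rootDeviceHints isn't used, the first volume is the root volume.
// Set the value to [] to clear all hardware RAID configurations, or set it
// to null (or leave it unset) to keep the existing configuration unchanged.
// +optional
// +nullable
HardwareRAIDVolumes []HardwareRAIDVolume `json:"hardwareRAIDVolumes,omitempty"`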

@hardys (Member) commented Sep 8, 2021

I guess this was missed from #942. It's unfortunate that this kind of thing can't be caught via unit tests; I wonder if there's anything we can do in future to get some coverage via the integration tests.

/cc @andfasano

@zaneb (Member) commented Sep 8, 2021

/approve

@metal3-io-bot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: zaneb

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@metal3-io-bot added the approved label on Sep 8, 2021
@metal3-io-bot merged commit a03ae04 into metal3-io:master on Sep 8, 2021
@levsha deleted the nullable-raid branch on September 9, 2021
Labels: approved, lgtm, ok-to-test, size/S