Third Party S3 configuration results in runtime panic #752

Closed
nate-duke opened this issue Feb 3, 2022 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.


nate-duke commented Feb 3, 2022

In #485 it was mentioned that any compatible S3 storage backend should more or less work. I'm trying to get the image-registry operator deployed against an S3-compatible appliance (EMC PowerScale, in my case) and am running into the following:

❯ oc get clusterversion
NAME      VERSION                         AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.okd-2022-01-29-035536   True        False         2d5h    Cluster version is 4.9.0-0.okd-2022-01-29-03553

❯ oc get clusteroperators.config.openshift.io/image-registry
NAME             VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
image-registry   4.9.0-0.okd-2022-01-29-035536   True        False         False      16m     

We're not in AWS, if that helps. It's all IPI on vSphere.

Operator configured with:

  storage:
    s3:
      bucket: okd4-image-registry
      regionEndpoint: fancy-emc-thing:9021

which results in:

    message: 'Unable to apply resources: unable to sync storage configuration: MissingRegion:

The source and documentation indicate that region is "optional", but I realize there's a lot of 'convention' encoded into much of the tooling around S3 libraries, so I humored it and filled in a region. I'm met with the following crash no matter what I put in the region attribute. S3 clients (tested with mc and s3cmd) work just fine against this appliance no matter what region I set.

  storage:
    s3:
      bucket: okd4-image-registry
      region: bogus
      regionEndpoint: powerscale.example.com:9021
E0203 17:53:04.215375       1 runtime.go:78] Observed a panic: runtime.boundsError{x:3, y:2, signed:true, code:0x3} (runtime error: slice bounds out of range [3:2])
goroutine 422 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2af0d20, 0xc000fc50b0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x2af0d20, 0xc000fc50b0)
	/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/aws/aws-sdk-go/aws/signer/v4.getURIPath(0xc000d44990, 0x0, 0x2cb7c29)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go:14 +0x15e
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).buildCanonicalString(0xc001c06ba8)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:656 +0xd0
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).build(0xc001c06ba8, 0x3203f00, 0xc000054098, 0xc001411900)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:533 +0x30d
github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0xc000d7fd80, 0x0, 0x31a2c80, 0xc000e12088, 0x10100, 0x2e2b398, 0x0, 0xc000d10700, 0x31bbb80, 0xc0010fcb80, ...)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:350 +0x3b3
github.com/aws/aws-sdk-go/aws/signer/v4.SignSDKRequestWithCurrentTime(0xc0006d0a00, 0x2e2b398, 0xc000e120c0, 0x1, 0x1)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:481 +0x2d5
github.com/aws/aws-sdk-go/aws/signer/v4.BuildNamedHandler.func1(0xc0006d0a00)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:436 +0x52
github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc0006d0be8, 0xc0006d0a00)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:267 +0x99
github.com/aws/aws-sdk-go/aws/request.(*Request).Sign(0xc0006d0a00, 0x10c318fa79, 0x45cb5c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:429 +0xd6
github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc0006d0a00, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:526 +0xe6
github.com/aws/aws-sdk-go/service/s3.(*S3).HeadBucketWithContext(0xc000e120b8, 0x3203f48, 0xc000054098, 0xc000d2d3f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:5355 +0xb3
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).bucketExists(0xc0015809c0, 0xc001132180, 0x21, 0x0, 0x203000)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:439 +0x106
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).CreateStorage(0xc0015809c0, 0xc001169180, 0xc0006c46c0, 0xc0011522d0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:503 +0x2e30
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).syncStorage(0xc0008bcf30, 0xc001169180, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:149 +0x15b
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).Apply(0xc0008bcf30, 0xc001169180, 0x0, 0x378)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:219 +0x4d
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).createOrUpdateResources(0xc0010f1900, 0xc001169180, 0x7, 0xc000d0b801)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:210 +0x18c
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).sync(0xc0010f1900, 0x0, 0xc001ce0d90)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:264 +0x1328
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc0010f1900, 0x2791760, 0x3169160)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:367 +0xb2
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor(0xc0010f1900)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:374 +0x56
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d2d190)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d2d190, 0x31a5440, 0xc000c10030, 0x1e701, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d2d190, 0x3b9aca00, 0x0, 0x1, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000d2d190, 0x3b9aca00, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).Run
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:445 +0x1a5
panic: runtime error: slice bounds out of range [3:2] [recovered]
	panic: runtime error: slice bounds out of range [3:2]

goroutine 422 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2af0d20, 0xc000fc50b0)
	/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/aws/aws-sdk-go/aws/signer/v4.getURIPath(0xc000d44990, 0x0, 0x2cb7c29)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go:14 +0x15e
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).buildCanonicalString(0xc001c06ba8)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:656 +0xd0
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).build(0xc001c06ba8, 0x3203f00, 0xc000054098, 0xc001411900)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:533 +0x30d
github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0xc000d7fd80, 0x0, 0x31a2c80, 0xc000e12088, 0x10100, 0x2e2b398, 0x0, 0xc000d10700, 0x31bbb80, 0xc0010fcb80, ...)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:350 +0x3b3
github.com/aws/aws-sdk-go/aws/signer/v4.SignSDKRequestWithCurrentTime(0xc0006d0a00, 0x2e2b398, 0xc000e120c0, 0x1, 0x1)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:481 +0x2d5
github.com/aws/aws-sdk-go/aws/signer/v4.BuildNamedHandler.func1(0xc0006d0a00)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:436 +0x52
github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc0006d0be8, 0xc0006d0a00)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:267 +0x99
github.com/aws/aws-sdk-go/aws/request.(*Request).Sign(0xc0006d0a00, 0x10c318fa79, 0x45cb5c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:429 +0xd6
github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc0006d0a00, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:526 +0xe6
github.com/aws/aws-sdk-go/service/s3.(*S3).HeadBucketWithContext(0xc000e120b8, 0x3203f48, 0xc000054098, 0xc000d2d3f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:5355 +0xb3
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).bucketExists(0xc0015809c0, 0xc001132180, 0x21, 0x0, 0x203000)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:439 +0x106
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).CreateStorage(0xc0015809c0, 0xc001169180, 0xc0006c46c0, 0xc0011522d0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:503 +0x2e30
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).syncStorage(0xc0008bcf30, 0xc001169180, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:149 +0x15b
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).Apply(0xc0008bcf30, 0xc001169180, 0x0, 0x378)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:219 +0x4d
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).createOrUpdateResources(0xc0010f1900, 0xc001169180, 0x7, 0xc000d0b801)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:210 +0x18c
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).sync(0xc0010f1900, 0x0, 0xc001ce0d90)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:264 +0x1328
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc0010f1900, 0x2791760, 0x3169160)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:367 +0xb2
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor(0xc0010f1900)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:374 +0x56
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d2d190)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d2d190, 0x31a5440, 0xc000c10030, 0x1e701, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d2d190, 0x3b9aca00, 0x0, 0x1, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000d2d190, 0x3b9aca00, 0xc000514ae0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).Run
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:445 +0x1a5
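
Digging into the trace a bit: the panic comes from getURIPath in the SDK's v4 signer, which (judging by the slice bounds error) appears to be slicing u.Opaque. Go's net/url parses a scheme-less host:port in a surprising way: everything before the first colon is treated as the URL scheme and the remainder lands in Opaque. A minimal, standalone sketch of that parsing quirk (the hostname is just illustrative):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Without a scheme, net/url treats "powerscale.example.com" as the
	// scheme and puts the remainder in Opaque; Host comes back empty.
	u, _ := url.Parse("powerscale.example.com:9021")
	fmt.Printf("scheme=%q host=%q opaque=%q\n", u.Scheme, u.Host, u.Opaque)
	// scheme="powerscale.example.com" host="" opaque="9021"

	// With an explicit scheme, the endpoint parses as intended.
	u, _ = url.Parse("https://powerscale.example.com:9021")
	fmt.Printf("scheme=%q host=%q opaque=%q\n", u.Scheme, u.Host, u.Opaque)
	// scheme="https" host="powerscale.example.com:9021" opaque=""
}

So it may be the missing scheme on regionEndpoint, rather than the region itself, that the signer is choking on.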

dmage commented Feb 4, 2022

@nate-duke can you try us-east-1?


nate-duke commented Feb 4, 2022

Thanks for taking a look @dmage. Looks the same to me. Let me know if you need more info.

The bucket does already exist, and the credentials configured in image-registry-private-configuration-user work just fine when used with either mc or s3cmd independently.

Come to think of it, it's odd that the operator controller is trying to reach out to the bucket at all. It doesn't have the credentials available to it, unless it's reading the secret directly through the API.

  storage:
    s3:
      bucket: okd4-dev-registry
      region: us-east-1
      regionEndpoint: powerscale-system-hostname:9021
E0204 12:23:11.412691       1 runtime.go:78] Observed a panic: runtime.boundsError{x:3, y:2, signed:true, code:0x3} (runtime error: slice bounds out of range [3:2])
goroutine 455 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2af0d20, 0xc000f716f8)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x2af0d20, 0xc000f716f8)
	/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/aws/aws-sdk-go/aws/signer/v4.getURIPath(0xc001159b00, 0x0, 0x2cb7c29)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go:14 +0x15e
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).buildCanonicalString(0xc0020b4ba8)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:656 +0xd0
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).build(0xc0020b4ba8, 0x3203f00, 0xc000054098, 0xc0014c1e40)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:533 +0x30d
github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0xc000d6be80, 0x0, 0x31a2c80, 0xc000e21b10, 0x10100, 0x2e2b398, 0x0, 0xc001123900, 0x31bbb80, 0xc000f19ee0, ...)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:350 +0x3b3
github.com/aws/aws-sdk-go/aws/signer/v4.SignSDKRequestWithCurrentTime(0xc0009ca000, 0x2e2b398, 0xc000e21b38, 0x1, 0x1)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:481 +0x2d5
github.com/aws/aws-sdk-go/aws/signer/v4.BuildNamedHandler.func1(0xc0009ca000)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:436 +0x52
github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc0009ca1e8, 0xc0009ca000)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:267 +0x99
github.com/aws/aws-sdk-go/aws/request.(*Request).Sign(0xc0009ca000, 0x10cb297640, 0x45cb5c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:429 +0xd6
github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc0009ca000, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:526 +0xe6
github.com/aws/aws-sdk-go/service/s3.(*S3).HeadBucketWithContext(0xc000e21b30, 0x3203f48, 0xc000054098, 0xc000dff9e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:5355 +0xb3
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).bucketExists(0xc001329140, 0xc0012dc270, 0x21, 0x0, 0x203000)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:439 +0x106
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).CreateStorage(0xc001329140, 0xc0020aaa80, 0xc000620900, 0xc0011260f0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:503 +0x2e30
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).syncStorage(0xc000be69f0, 0xc0020aaa80, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:149 +0x15b
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).Apply(0xc000be69f0, 0xc0020aaa80, 0x0, 0x378)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:219 +0x4d
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).createOrUpdateResources(0xc000bedea0, 0xc0020aaa80, 0x7, 0xc0012b8e01)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:210 +0x18c
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).sync(0xc000bedea0, 0x0, 0xc001295d90)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:264 +0x1328
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc000bedea0, 0x2791760, 0x3169160)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:367 +0xb2
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor(0xc000bedea0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:374 +0x56
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000dff720)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000dff720, 0x31a5440, 0xc0015d7170, 0xc000b24f01, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000dff720, 0x3b9aca00, 0x0, 0x1, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000dff720, 0x3b9aca00, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).Run
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:445 +0x1a5
panic: runtime error: slice bounds out of range [3:2] [recovered]
	panic: runtime error: slice bounds out of range [3:2]

goroutine 455 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2af0d20, 0xc000f716f8)
	/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/aws/aws-sdk-go/aws/signer/v4.getURIPath(0xc001159b00, 0x0, 0x2cb7c29)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/uri_path.go:14 +0x15e
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).buildCanonicalString(0xc0020b4ba8)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:656 +0xd0
github.com/aws/aws-sdk-go/aws/signer/v4.(*signingCtx).build(0xc0020b4ba8, 0x3203f00, 0xc000054098, 0xc0014c1e40)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:533 +0x30d
github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0xc000d6be80, 0x0, 0x31a2c80, 0xc000e21b10, 0x10100, 0x2e2b398, 0x0, 0xc001123900, 0x31bbb80, 0xc000f19ee0, ...)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:350 +0x3b3
github.com/aws/aws-sdk-go/aws/signer/v4.SignSDKRequestWithCurrentTime(0xc0009ca000, 0x2e2b398, 0xc000e21b38, 0x1, 0x1)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:481 +0x2d5
github.com/aws/aws-sdk-go/aws/signer/v4.BuildNamedHandler.func1(0xc0009ca000)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:436 +0x52
github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run(0xc0009ca1e8, 0xc0009ca000)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:267 +0x99
github.com/aws/aws-sdk-go/aws/request.(*Request).Sign(0xc0009ca000, 0x10cb297640, 0x45cb5c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:429 +0xd6
github.com/aws/aws-sdk-go/aws/request.(*Request).Send(0xc0009ca000, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:526 +0xe6
github.com/aws/aws-sdk-go/service/s3.(*S3).HeadBucketWithContext(0xc000e21b30, 0x3203f48, 0xc000054098, 0xc000dff9e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:5355 +0xb3
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).bucketExists(0xc001329140, 0xc0012dc270, 0x21, 0x0, 0x203000)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:439 +0x106
github.com/openshift/cluster-image-registry-operator/pkg/storage/s3.(*driver).CreateStorage(0xc001329140, 0xc0020aaa80, 0xc000620900, 0xc0011260f0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/s3/s3.go:503 +0x2e30
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).syncStorage(0xc000be69f0, 0xc0020aaa80, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:149 +0x15b
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).Apply(0xc000be69f0, 0xc0020aaa80, 0x0, 0x378)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:219 +0x4d
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).createOrUpdateResources(0xc000bedea0, 0xc0020aaa80, 0x7, 0xc0012b8e01)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:210 +0x18c
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).sync(0xc000bedea0, 0x0, 0xc001295d90)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:264 +0x1328
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc000bedea0, 0x2791760, 0x3169160)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:367 +0xb2
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor(0xc000bedea0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:374 +0x56
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000dff720)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000dff720, 0x31a5440, 0xc0015d7170, 0xc000b24f01, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000dff720, 0x3b9aca00, 0x0, 0x1, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000dff720, 0x3b9aca00, 0xc000f29c80)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).Run
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:445 +0x1a5


dmilde commented May 2, 2022

Experiencing the same issue with vSphere (UPI) + NetApp StorageGRID when trying to update OpenShift 4.8 to 4.9. This breaks updates for us, so we created a Red Hat case.

Sorry for not providing any new info here (besides confirming that other storage endpoints are affected as well), but I can add more once we get a response.


dmage commented May 2, 2022

Please update your regionEndpoint so that it starts with http:// or https://.
This problem is tracked in Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2066388
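
With the endpoint from this report, the storage stanza would look something like the following (hostname taken from the example above; use http:// or https:// to match your appliance):

  storage:
    s3:
      bucket: okd4-image-registry
      region: us-east-1
      regionEndpoint: https://powerscale.example.com:9021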


dmilde commented May 2, 2022

Please update your regionEndpoint so that it starts with http:// or https://. This problem is tracked in Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2066388

Thank you, that fixed the error for us.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Jul 31, 2022
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 31, 2022
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this as completed on Sep 30, 2022

openshift-ci bot commented Sep 30, 2022

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

