
bump to kube 1.25.2 #320

Merged: 18 commits into openshift:master, Oct 25, 2022

Conversation

@sanchezl (Contributor) commented Oct 5, 2022

  • bump 1.25.0: UIDs must be initialized earlier during create
  • bump 1.25.0: new images.v1 conversion functions
  • bump 1.25.0: fixup webhook_test
  • bump 1.25.0: PodAffinityNamespaceSelector is GA
  • bump 1.25.0: use go-restful v3
  • bump 1.25.0: admission plugin constructor signature changed
  • bump 1.25.0: EditRegistriesConfig func signature changed
  • bump 1.25.0: implement rest.Storage.Destroy()
  • bump 1.24.0: resource names must be lowercase only
  • bump 1.24.0: fix TestOpenAPIRoundtrip
  • bump 1.24.0: deprecated clustername property
  • bump 1.24.0: rest.ValidNamespace removed
  • bump 1.25.2: bump go.mod deps
  • bump 1.25.0: use go 1.19

@openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Oct 5, 2022
@sanchezl changed the title from "bump to kube 1.25.0" to "WIP: bump to kube 1.25.0" on Oct 5, 2022
@openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Oct 5, 2022
@sanchezl changed the title from "WIP: bump to kube 1.25.0" to "WIP: bump to kube 1.25.2" on Oct 6, 2022
@flavianmissi (Member)

/retest

@sanchezl force-pushed the kube-1.25.0 branch 2 times, most recently from 27a9001 to f66bbbf on October 7, 2022 19:47
@sanchezl (Contributor, Author) commented Oct 7, 2022

/test e2e-aws

@flavianmissi (Member)

/retest

1 similar comment
@xiuwang commented Oct 10, 2022

/retest

@flavianmissi (Member)

/retest-required

@dmage (Contributor) commented Oct 10, 2022

/retest

@flavianmissi (Member)

It looks like kube is trying to match the apps.openshift.io group against apps; since those don't match, we're getting the cannot find the storage version kind for *autoscaling.Scale error [1]. I've added some debug logs around it (formatted for readability):

I1011 15:30:36.435097       1 installer.go:152] storageVersioner: runtime.multiGroupVersioner{
target:schema.GroupVersion{Group:"apps.openshift.io", Version:"v1"}, acceptedGroupKinds:
[]schema.GroupKind{schema.GroupKind{Group:"apps.openshift.io", Kind:""}, schema.GroupKind{
Group:"apps.openshift.io", Kind:""}}, coerce:false}
[...]
F1011 15:30:36.435455       1 openshift_apiserver.go:379] unable to install api resources: unable to setup API 
[...]: error in registering resource: deploymentconfigs/scale, 
cannot find the storage version kind for *autoscaling.Scale
- fqKinds: 
[]schema.GroupVersionKind{schema.GroupVersionKind{Group:"apps", Version:"__internal", Kind:"Scale"}, 
schema.GroupVersionKind{Group:"autoscaling", Version:"__internal", Kind:"Scale"}, 
schema.GroupVersionKind{Group:"extensions", Version:"__internal", Kind:"Scale"}}

I have no idea why this is happening or how to fix it 😅

[1] https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/installer.go?plain=1#L153
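
A minimal sketch of the failing lookup, reconstructed from the logged values above (the fqKinds and the versioner target); this uses only apimachinery and is not code from this PR:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The GVKs the scheme reports for *autoscaling.Scale, per the fqKinds in the log.
	fqKinds := []schema.GroupVersionKind{
		{Group: "apps", Version: "__internal", Kind: "Scale"},
		{Group: "autoscaling", Version: "__internal", Kind: "Scale"},
		{Group: "extensions", Version: "__internal", Kind: "Scale"},
	}

	// The storage versioner from the log: it only accepts kinds from apps.openshift.io.
	versioner := runtime.NewMultiGroupVersioner(
		schema.GroupVersion{Group: "apps.openshift.io", Version: "v1"},
		schema.GroupKind{Group: "apps.openshift.io"},
	)

	// None of the fqKinds belong to apps.openshift.io, so the lookup fails,
	// which the installer reports as "cannot find the storage version kind".
	gvk, ok := versioner.KindForGroupVersionKinds(fqKinds)
	fmt.Println(gvk, ok) // prints an empty GVK and false
}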

@flavianmissi (Member)

Can confirm that adding &autoscaling.Scale{} to the list of known types here gets the apiserver pod running again: https://github.com/openshift/openshift-apiserver/blob/master/pkg/apps/apis/apps/register.go?plain=1#L36 (see the sketch below).
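
For illustration, the change described above would look roughly like this in pkg/apps/apis/apps/register.go; the surrounding type list is abbreviated and the identifier names (addKnownTypes, SchemeGroupVersion) follow the usual convention, so treat this as a sketch rather than the actual diff:

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/pkg/apis/autoscaling"
)

func addKnownTypes(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion,
		&DeploymentConfig{},
		// ... the other internal apps.openshift.io types stay as they are ...

		// Also register the internal Scale type under apps.openshift.io, so the
		// storage versioner can resolve the kind for deploymentconfigs/scale.
		&autoscaling.Scale{},
	)
	return nil
}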

@dmage (Contributor) commented Oct 11, 2022

@flavianmissi (Member)

@dmage not sure I follow: extensions.AddToScheme is there, and that includes autoscaling. Or do I misunderstand?

@dmage (Contributor) commented Oct 11, 2022

Never mind: https://github.com/kubernetes/kubernetes/blob/v1.25.2/pkg/apis/apps/register.go#L57. I guess you are right, and you'll want to do it the same way.

@flavianmissi (Member)

/retest

1 similar comment
@xiuwang commented Oct 12, 2022

/retest

@flavianmissi (Member)

I'm getting an error when running oc start-build (using the same data as https://github.com/openshift/origin/blob/master/test/extended/builds/s2i_quota.go#L43).

$ oc create -f test/extended/testdata/builds/test-s2i-build-quota.json # pwd is $GOPATH/src/github.com/openshift/origin
buildconfig.build.openshift.io/s2i-build-quota created

$ oc start-build s2i-build-quota --from-dir test/extended/testdata/builds/build-quota
Uploading directory "test/extended/testdata/builds/build-quota" as binary input for the build ...

Uploading finished
Error from server (InternalError): Internal error occurred: system metadata was not initialized

In the apiserver pod logs:

I1012 11:31:40.188917       1 httplog.go:131] "HTTP" verb="GET" URI="/apis/build.openshift.io/v1/namespaces/build-
test/buildconfigs/s2i-build-quota" latency="1.098656ms" userAgent="oc/4.12.0 (darwin/amd64) kubernetes/3c85519" 
audit-ID="5b0c1725-ef4f-4318-a48d-e7b81746a3c3" srcIP="10.128.0.1:56558" resp=200
I1012 11:31:40.242665       1 rest.go:181] failed to validate binary: &build.BinaryBuildRequestOptions{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-
build-quota", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, AsFile:"", 
Commit:"7b769f1ba95352562db1682cfb1c814c5b0c83eb", Message:"Merge pull request #27460 from dgoodwin/fix-unit-
tests\n\nFix unit tests that have broken since we stopped running them 2 years ago", AuthorName:"OpenShift Merge Robot",
AuthorEmail:"openshift-merge-robot@users.noreply.github.com", CommitterName:"GitHub", 
CommitterEmail:"noreply@github.com"}
I1012 11:31:40.243152       1 httplog.go:131] "HTTP" verb="POST" URI="/apis/build.openshift.io/v1/namespaces/build-
test/buildconfigs/s2i-build-quota/instantiatebinary?name=s2i-build-quota&namespace=build-test&
revision.authorEmail=openshift-merge-robot%40users.noreply.github.com&
revision.authorName=OpenShift+Merge+Robot&revision.commit=7b769f1ba95352562db1682cfb1c814c5b0c83eb&
revision.committerEmail=noreply%40github.com&revision.committerName=GitHub&
revision.message=Merge+pull+request+%2327460+from+dgoodwin%2Ffix-unit-tests%0A
%0AFix+unit+tests+that+have+broken+since+we+stopped+running+them+2+years+ago" latency="902.269µs" 
userAgent="oc/4.12.0 (darwin/amd64) kubernetes/3c85519" audit-ID="bda9432b-e199-4c78-9506-26115c582f59" 
srcIP="10.128.0.1:56558" resp=500 statusStack=<

	goroutine 148622 [running]:
	k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00535c790, 0x2a709b2?)
		k8s.io/apiserver@v0.25.2/pkg/server/httplog/httplog.go:324 +0x90
	k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00535c790, 0x1f4?)
		k8s.io/apiserver@v0.25.2/pkg/server/httplog/httplog.go:306 +0x25
	k8s.io/apiserver/pkg/endpoints/filters.(*auditResponseWriter).WriteHeader(0xc005852d80, 0x600000c004079800?)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/audit.go:257 +0x3c
	k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0x403e940?, 0xc006260f20?)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/metrics/metrics.go:678 +0x29
	k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc005853260, {0xc003a7c000, 0xfb, 0x1000})
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/responsewriters/writers.go:232 +0x5b8
	encoding/json.(*Encoder).Encode(0xc006393500, {0x45c5440, 0xc0056966e0})
		encoding/json/stream.go:231 +0x1fe
	k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x0?, {0x4cda200?, 0xc0056966e0?}, {0x4ccb140, 0xc005853260})
		k8s.io/apimachinery@v0.25.2/pkg/runtime/serializer/json/json.go:246 +0x19a
	k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc000392280, {0x4cda200, 0xc0056966e0}, {0x4ccb140, 0xc005853260})
		k8s.io/apimachinery@v0.25.2/pkg/runtime/serializer/json/json.go:220 +0xfc
	k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc005696820, {0x4cda200, 0xc0056966e0}, {0x4ccb140, 0xc005853260}, {0x0?, 0x0?})
		k8s.io/apimachinery@v0.25.2/pkg/runtime/serializer/versioning/versioning.go:268 +0xd45
	k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).encode(0xc005696820, {0x4cda200, 0xc0056966e0}, {0x4ccb140, 0xc005853260}, {0x0?, 0x0?})
		k8s.io/apimachinery@v0.25.2/pkg/runtime/serializer/versioning/versioning.go:214 +0x167
	k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc006508000?, {0x4cda200?, 0xc0056966e0?}, {0x4ccb140?, 0xc005853260?})
		k8s.io/apimachinery@v0.25.2/pkg/runtime/serializer/versioning/versioning.go:207 +0x33
	k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4673f0a, 0x10}, {0x7f643c03f8c8, 0xc005696820}, {0x4cf6168?, 0xc003e56bc0}, 0xc00504da00, 0x1f4, {0x4cda200, 0xc0056966e0})
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/responsewriters/writers.go:107 +0x5dd
	k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated.func2()
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/responsewriters/writers.go:278 +0x5f
	k8s.io/apiserver/pkg/endpoints/request.(*durationTracker).Track(0xc007e1f3e0, 0xc00070f8c0)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/request/webhook_duration.go:75 +0x93
	k8s.io/apiserver/pkg/endpoints/request.TrackSerializeResponseObjectLatency({0x4cf7290?, 0xc007e1f6b0?}, 0xc00070f8c0)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/request/webhook_duration.go:216 +0x5e
	k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x4cf5e98, 0xc0029178c0}, {0x4cf6108, 0x6b279b0}, {{0x4679478?, 0x0?}, {0x4659a05?, 0x696395cebbc24fc6?}}, {0x4cf6168, 0xc003e56bc0}, ...)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/responsewriters/writers.go:277 +0x67f
	k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x4ccaa80?, 0xc005696640?}, {0x4cf5e98, 0xc0029178c0}, {{0x4679478?, 0x248114e56e?}, {0x4659a05?, 0x40?}}, {0x4cf6168, 0xc003e56bc0}, ...)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/responsewriters/writers.go:298 +0x273
	k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(...)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/rest.go:114
	k8s.io/apiserver/pkg/endpoints/handlers.(*responder).Error(0xc006176e00?, {0x4ccaa80?, 0xc005696640?})
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/rest.go:243 +0x9e
	github.com/openshift/openshift-apiserver/pkg/build/apiserver/registry/buildconfiginstantiate.(*binaryInstantiateHandler).ServeHTTP(0xc006176e00, {0x4cf7290?, 0xc007e1f980?}, 0xc004b5d47f?)
		github.com/openshift/openshift-apiserver/pkg/build/apiserver/registry/buildconfiginstantiate/rest.go:172 +0xb5
	k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1.1()
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/rest.go:226 +0x246
	k8s.io/apiserver/pkg/endpoints/metrics.RecordLongRunning(0xc00504da00, 0x401acc0?, {0x4663a4b, 0x9}, 0xc006394758)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/metrics/metrics.go:472 +0x571
	k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1({0x4cf6168, 0xc003e56bc0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/handlers/rest.go:220 +0xc7b
	k8s.io/apiserver/pkg/endpoints.restfulConnectResource.func1(0xc003e56ba0, 0xc006178a10)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/installer.go:1228 +0x6f
	k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc003e56ba0, 0xc006178a10)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/metrics/metrics.go:528 +0x22c
	github.com/emicklei/go-restful/v3.(*Container).dispatch(0xc0028ac870, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		github.com/emicklei/go-restful/v3@v3.8.0/container.go:299 +0x616
	github.com/emicklei/go-restful/v3.(*Container).Dispatch(...)
		github.com/emicklei/go-restful/v3@v3.8.0/container.go:204
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4693a92?, 0x0?}, 0xc0028ac870?, 0xc0008db2d0?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:146 +0x4db
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0047a5300, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0xc0028fcb10?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4694486?, 0x0?}, 0xc0029250e0?, 0xc0001e6d90?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc001e8d080, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0x0?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x469af85?, 0x0?}, 0xc0022d07e0?, 0xc0007d8850?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002478200, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0xc?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4695152?, 0x0?}, 0xc0013b7b90?, 0xc00067c380?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0044d1240, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0x0?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x469526a?, 0x0?}, 0xc002448a20?, 0xc0001d50a0?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00284b9c0, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0xc0003922d0?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x469e759?, 0x0?}, 0xc0039d3050?, 0xc0006cc310?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003f75d80, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0xc0051c3980?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x469e8cd?, 0x0?}, 0xc00322c5a0?, 0xc0004805b0?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002738980, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:255 +0x2cd
	k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc004b5d440?, {0x4cf6168?, 0xc003e569c0?}, 0xc005c63ab8?)
		k8s.io/apiserver@v0.25.2/pkg/server/mux/pathrecorder.go:235 +0x73
	k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x467ca8e?, 0x2a68e5f?}, 0xc002af0bd0?, 0xc00069e4d0?}, {0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/handler.go:154 +0x61f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
	net/http.HandlerFunc.ServeHTTP(0x4cf7290?, {0x4cf6168?, 0xc003e569c0?}, 0x4?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/authorization.go:64 +0x4f4
	net/http.HandlerFunc.ServeHTTP(0x0?, {0x4cf6168?, 0xc003e569c0?}, 0x0?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
	net/http.HandlerFunc.ServeHTTP(0xc00504da00?, {0x4cf6168?, 0xc003e569c0?}, 0xc0005e2050?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/server/filters/maxinflight.go:162 +0x2d3
	net/http.HandlerFunc.ServeHTTP(0xc007e1f5f0?, {0x4cf6168?, 0xc003e569c0?}, 0x1ef602a?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
	net/http.HandlerFunc.ServeHTTP(0x25?, {0x4cf6168?, 0xc003e569c0?}, 0xc0058f2900?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/impersonation.go:50 +0x21c
	net/http.HandlerFunc.ServeHTTP(0xa?, {0x4cf6168?, 0xc003e569c0?}, 0xc005c64760?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
	net/http.HandlerFunc.ServeHTTP(0xc007e1f5f0?, {0x4cf6168?, 0xc003e569c0?}, 0x1ef602a?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x4cf6168, 0xc003e569c0}, 0xc00504da00)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
	net/http.HandlerFunc.ServeHTTP(0x4cf8678?, {0x4cf6168?, 0xc003e569c0?}, 0xc0002b76c0?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1({0x4cf6168?, 0xc003e56960}, 0xc00504d900)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/audit.go:117 +0x59c
	net/http.HandlerFunc.ServeHTTP(0xa?, {0x4cf6168?, 0xc003e56960?}, 0xc005c64b00?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x4cf6168, 0xc003e56960}, 0xc00504d900)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
	net/http.HandlerFunc.ServeHTTP(0xc007e1f560?, {0x4cf6168?, 0xc003e56960?}, 0x1ef602a?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x4cf6168, 0xc003e56960}, 0xc00504d900)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
	net/http.HandlerFunc.ServeHTTP(0x4cf7290?, {0x4cf6168?, 0xc003e56960?}, 0x4caa588?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x4cf6168, 0xc003e56960}, 0xc00504d900)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/authentication.go:80 +0x622
	net/http.HandlerFunc.ServeHTTP(0x4cf7290?, {0x4cf6168?, 0xc003e56960?}, 0x4caa4c0?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x4cf6168, 0xc003e56960}, 0xc00504d600)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filterlatency/filterlatency.go:89 +0x330
	net/http.HandlerFunc.ServeHTTP(0xc00504d600?, {0x4cf6168?, 0xc003e56960?}, 0x1ab326a?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc002afc858, {0x4cf6168, 0xc003e56960}, 0x60?)
		k8s.io/apiserver@v0.25.2/pkg/server/filters/timeout.go:86 +0x342
	k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x4cf6168, 0xc003e56960}, 0xc00504d600)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/request_deadline.go:65 +0x5f2
	net/http.HandlerFunc.ServeHTTP(0xc00504d600?, {0x4cf6168?, 0xc003e56960?}, 0x45a4100?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x4cf6168?, 0xc003e56960}, 0xc00504d600)
		k8s.io/apiserver@v0.25.2/pkg/server/filters/waitgroup.go:51 +0x7b3
	net/http.HandlerFunc.ServeHTTP(0x4cf7290?, {0x4cf6168?, 0xc003e56960?}, 0x7f64659a7a68?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithAuditAnnotations.func1({0x4cf6168, 0xc003e56960}, 0xc00504d500)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/audit_annotations.go:36 +0x125
	net/http.HandlerFunc.ServeHTTP(0x4cf7290?, {0x4cf6168?, 0xc003e56960?}, 0x4caa4c0?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x4cf6168?, 0xc003e56960}, 0xc00504d400)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/warning.go:35 +0x18d
	net/http.HandlerFunc.ServeHTTP(0x44f52e0?, {0x4cf6168?, 0xc003e56960?}, 0xd?)
		net/http/server.go:2109 +0x2f
	k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x4cf6168, 0xc003e56960}, 0xc003e56940?)
		k8s.io/apiserver@v0.25.2/pkg/endpoints/filters/cachecontrol.go:31 +0x126
	net/http.HandlerFunc.ServeHTTP(0x4cf88a8?, {0x4cf6168?, 0xc003e56960?}, 0x4caa4c0?)
		net/http/server.go:2109 +0x2f
	created by golang.org/x/net/http2.(*serverConn).processHeaders
		golang.org/x/net@v0.0.0-20220909164309-bea034e7d591/http2/server.go:1960 +0x5b9
 > addedInfo=<

	logging error output: "{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":
\"Internal error occurred: system metadata was not initialized\",\"reason\":\"InternalError\",\"details\":{\"causes\":
[{\"message\":\"system metadata was not initialized\"}]},\"code\":500}\n"

From the logs it looks like the build.BinaryBuildRequestOptions has a zero CreationTimestamp and an empty UID, so this check [1] fails. It seems this check was moved in 1.25 [2].

[1] https://github.com/kubernetes/apiserver/blob/master/pkg/registry/rest/create.go#L102-L104
[2] kubernetes/apiserver@970b3ee
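
For context, the check referenced in [1] boils down to rejecting objects whose system metadata was never populated. An illustrative stand-in for what it verifies (hasSystemMetadata is a hypothetical helper, not the upstream function):

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// hasSystemMetadata approximates the condition that fails here: as of 1.25,
// rest.BeforeCreate expects the UID and CreationTimestamp to already be set
// and returns "system metadata was not initialized" otherwise.
func hasSystemMetadata(obj metav1.Object) bool {
	ts := obj.GetCreationTimestamp()
	return len(obj.GetUID()) != 0 && !ts.IsZero()
}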

@flavianmissi (Member)

I was able to run oc start-build after initializing the metadata fields before this call to rest.BeforeCreate: https://github.com/openshift/openshift-apiserver/blob/master/pkg/build/apiserver/registry/buildconfiginstantiate/rest.go?plain=1#L169

Something like:

func (h *binaryInstantiateHandler) handle(r io.Reader) (runtime.Object, error) {
	h.options.Name = h.name

	// Initialize system metadata (UID, creation timestamp) as early as possible,
	// before the stricter rest.BeforeCreate check runs.
	objectMeta, err := meta.Accessor(h.options)
	if err != nil {
		return nil, err
	}
	rest.FillObjectMetaSystemFields(objectMeta)

	if err := rest.BeforeCreate(BinaryStrategy, h.ctx, h.options); err != nil {
		klog.Infof("failed to validate binary: %#v", h.options)
		return nil, err
	}
	[...]

Taken from kubernetes/apiserver@970b3ee#diff-8a686b851cb0fb6a74cabe61fd1f7da9125cc4c4f6611a8df10ce869c1d68267R385-R390

@xiuwang commented Oct 14, 2022

/retest

1 similar comment
@flavianmissi (Member)

/retest

@flavianmissi (Member)

/retest-required

@openshift-ci bot removed the lgtm label (indicates that a PR is ready to be merged) on Oct 20, 2022
@dmage (Contributor) commented Oct 20, 2022

/retest

@sanchezl (Contributor, Author)

/retest

flavianmissi added a commit to flavianmissi/openshift-apiserver that referenced this pull request Oct 24, 2022
@sanchezl (Contributor, Author)

/retest

3 similar comments
@xiuwang commented Oct 25, 2022

/retest

@flavianmissi (Member)

/retest

@sanchezl (Contributor, Author)

/retest

@openshift-ci bot commented Oct 25, 2022

@sanchezl: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                Commit   Details  Required  Rerun command
ci/prow/e2e-aws-builds   9575cac  link     true      /test e2e-aws-builds
ci/prow/e2e-cmd          9575cac  link     false     /test e2e-cmd
ci/prow/e2e-aws          9575cac  link     true      /test e2e-aws
ci/prow/e2e-aws-serial   9575cac  link     true      /test e2e-aws-serial
ci/prow/e2e-aws-upgrade  9575cac  link     true      /test e2e-aws-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@tkashem (Contributor) commented Oct 25, 2022

/lgtm

@openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Oct 25, 2022
@openshift-ci bot commented Oct 25, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sanchezl, tkashem

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@benluddy (Contributor)

/retest-required

@benluddy (Contributor)

/test e2e-aws-serial

@openshift-ci bot commented Oct 25, 2022

@benluddy: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test e2e-aws-ovn
  • /test e2e-aws-ovn-builds
  • /test e2e-aws-ovn-serial
  • /test e2e-aws-ovn-upgrade
  • /test images
  • /test unit
  • /test verify
  • /test verify-deps

The following commands are available to trigger optional jobs:

  • /test e2e-aws-ovn-cmd

Use /test all to run all jobs.

In response to this:

/test e2e-aws-serial

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@benluddy (Contributor)

/test e2e-aws-ovn-serial

@benluddy (Contributor)

/test e2e-aws-ovn

@openshift-merge-robot merged commit 503fdac into openshift:master on Oct 25, 2022
flavianmissi added a commit to flavianmissi/openshift-apiserver that referenced this pull request Oct 26, 2022