$ kubectl create -f application-monitoring-operator-syncset.yaml
The SyncSet "dev-eng-application-monitoring-operator" is invalid:
* spec.resources[2].APIVersion: Invalid value: "authorization.openshift.io/v1": must use kubernetes group for this resource kind
* spec.resources[3].APIVersion: Invalid value: "authorization.openshift.io/v1": must use kubernetes group for this resource kind
* spec.resources[4].APIVersion: Invalid value: "authorization.openshift.io/v1": must use kubernetes group for this resource kind
* spec.resources[10].APIVersion: Invalid value: "authorization.openshift.io/v1": must use kubernetes group for this resource kind
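For context, the SyncSet we are creating embeds the OpenShift-group objects inline. A rough sketch (the SyncSet name matches the error output above; the cluster deployment and resource names are illustrative):

```yaml
apiVersion: hive.openshift.io/v1
kind: SyncSet
metadata:
  name: dev-eng-application-monitoring-operator
spec:
  clusterDeploymentRefs:
  - name: dev-cluster                             # illustrative
  resources:
  - apiVersion: authorization.openshift.io/v1     # rejected: "must use kubernetes group"
    kind: ClusterRoleBinding
    metadata:
      name: alertmanager-application-monitoring   # illustrative
    # ...
```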
We can see the following in the hiveadmission pod logs:
time="2020-03-02T16:51:26Z" level=info msg="Validating request" group=hive.openshift.io method=Validate operation=CREATE resource=syncsets version=v1
time="2020-03-02T16:51:26Z" level=info msg="SyncSet.hive.openshift.io \"dev-eng-application-monitoring-operator\" is invalid: [spec.resources[2].APIVersion: Invalid value: \"authorization.openshift.io/v1\": must use kubernetes group for this resource kind, spec.resources[3].APIVersion: Invalid value: \"authorization.openshift.io/v1\": must use kubernetes group for this resource kind, spec.resources[4].APIVersion: Invalid value: \"authorization.openshift.io/v1\": must use kubernetes group for this resource kind, spec.resources[10].APIVersion: Invalid value: \"authorization.openshift.io/v1\": must use kubernetes group for this resource kind]" group=hive.openshift.io method=validateCreate object.Name=dev-eng-application-monitoring-operator operation=CREATE resource=syncsets version=v1
Can you confirm if this is the expected behaviour?
If we deploy the object directly on the OpenShift cluster provisioned by Hive (without using a Hive SyncSet), it works as expected:
$ oc apply -f deploy/cluster-roles/alertmanager-clusterrole_binding.yaml
clusterrolebinding.authorization.openshift.io/alertmanager-application-monitoring created
Yes, I believe this is intentional and expected; you can see the relevant commit here: a1dbceb
Trying to recall the specifics, but I believe these types are quite strange in that they're almost aliases between the k8s type and the OpenShift type. Interacting with the OpenShift types was posing some pretty big problems, but I can't remember the details. @abutcher do you?
Can you switch your YAML to use the kube API groups for these types instead of the OpenShift ones?
https://issues.redhat.com/browse/CO-532 is the bug that prompted Hive to reject resources in authorization.openshift.io. The short story is that you should not be using authorization.openshift.io and should use rbac.authorization.k8s.io instead.
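Concretely, that means switching the manifest's apiVersion to the kubernetes RBAC group. A sketch of the change (the binding name is taken from the oc apply output in this thread; the role and subject names are illustrative, not the actual contents of alertmanager-clusterrole_binding.yaml):

```yaml
# Before (rejected by hiveadmission):
#   apiVersion: authorization.openshift.io/v1
#   kind: ClusterRoleBinding
# After: the equivalent object in the kubernetes RBAC group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alertmanager-application-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alertmanager-application-monitoring   # illustrative; use the actual role name
subjects:
- kind: ServiceAccount
  name: alertmanager                          # illustrative
  namespace: application-monitoring           # illustrative
```

OpenShift accepts the rbac.authorization.k8s.io/v1 form directly, so the same manifest works both via SyncSet and with oc apply.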
For additional context: we are using Hive to provision dev clusters on demand, specifically master at commit f95446d21ae47da6689f8de91c9b1463955223b7. We are trying to deploy application-monitoring-operator using a SyncSet object, including for example alertmanager-clusterrole_binding.yaml (whose apiVersion is authorization.openshift.io/v1), and we got the invalid-value errors shown above.