
hive upgrade to 1.1.6 is expecting CRD API version to be v1beta instead of v1 #1846

Closed
groenator opened this issue Aug 8, 2022 · 5 comments


@groenator

Hi,

I am experiencing an issue upgrading the hive operator from version 1.1.5 to 1.1.6: the operator expects the CRD API version to be apiextensions.k8s.io/v1beta1, but my cluster only serves v1.

```
time="2022-08-08T10:41:33Z" level=info msg="Version: openshift/hive v1.1.5-0-g356f6bf"
time="2022-08-08T10:41:33Z" level=info msg="Starting /healthz and /readyz endpoints"
time="2022-08-08T10:41:33Z" level=info msg="generated leader election ID" id=9902aaa9-f507-4422-a798-3bf6154213a2
I0808 10:41:33.694686       1 leaderelection.go:243] attempting to acquire leader lease hive/hive-operator-leader...
I0808 10:41:33.719960       1 leaderelection.go:253] successfully acquired lease hive/hive-operator-leader
time="2022-08-08T10:41:33Z" level=info msg="became leader" id=9902aaa9-f507-4422-a798-3bf6154213a2
I0808 10:41:35.570230       1 request.go:655] Throttling request took 1.048162031s, request: GET:https://172.20.0.1:443/apis/batch/v1?timeout=32s
time="2022-08-08T10:41:36Z" level=info msg="Registering Components."
time="2022-08-08T10:41:39Z" level=info msg="hive operator NS: hive"
time="2022-08-08T10:41:39Z" level=info msg="Starting the Cmd."
time="2022-08-08T10:41:39Z" level=info msg="started metrics calculator goroutine"
time="2022-08-08T10:41:39Z" level=info msg="calculating metrics for all Hive" controller=metrics
time="2022-08-08T10:41:39Z" level=info msg="reconcile complete" controller=metrics elapsedMillis=0 elapsedMillisGT=0 outcome=unspecified
time="2022-08-08T10:41:42Z" level=error msg="error running manager" error="no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\""
time="2022-08-08T10:41:42Z" level=error msg="error received after stop sequence was engaged" error="context canceled"
time="2022-08-08T10:41:42Z" level=info msg="leader lost" id=9902aaa9-f507-4422-a798-3bf6154213a2
```

Cluster CRD API versions:

```shell
kubectl get crd | grep hive | grep -v NAME | cut -f1 -d " " | while read mycrd; do
  echo
  echo "$mycrd"
  kubectl get crd "$mycrd" -o yaml | grep 'kind: CustomResourceDefinition' -B1
  echo
done
```

```
checkpoints.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterclaims.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterdeployments.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterdeprovisions.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterimagesets.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterpools.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterprovisions.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterrelocates.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clusterstates.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clustersyncleases.hiveinternal.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

clustersyncs.hiveinternal.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

dnszones.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

fakeclusterinstalls.hiveinternal.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

hiveconfigs.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

machinepoolnameleases.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

machinepools.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

selectorsyncidentityproviders.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

selectorsyncsets.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

syncidentityproviders.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition

syncsets.hive.openshift.io
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
```
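As an aside, the text-filtering part of the one-liner above can be exercised in isolation against canned output (the sample lines below are illustrative, not taken from a live cluster):

```shell
# Sketch of the name-extraction step from the one-liner above, run against
# canned `kubectl get crd` output (illustrative sample data only).
sample='NAME                              CREATED AT
checkpoints.hive.openshift.io     2022-01-01T00:00:00Z
clusterclaims.hive.openshift.io   2022-01-01T00:00:00Z'

# grep keeps hive CRDs, drops the header; cut keeps the first column (the name).
printf '%s\n' "$sample" | grep hive | grep -v NAME | cut -f1 -d " "
```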

When version 1.1.6 was published, I saw a PR that upgraded the CRD API versions to v1.

Are there any issues with version 1.1.6? Is it safe to upgrade the operator to a higher version without breaking the current setup?

I have a few cluster deployments which I cannot delete.

Regards,

@groenator groenator changed the title Hive upgrade to 1.1.6 is expecting CRD API version to be v1beta instead of v1 hive upgrade to 1.1.6 is expecting CRD API version to be v1beta instead of v1 Aug 8, 2022
@2uasimojo
Member

Looks like you may be trying to run a really old (>1y) version of hive on a cluster with a newer (>=1.22) version of k8s; k8s 1.22 removed the apiextensions.k8s.io/v1beta1 API that old hive versions use. Are you getting this from OperatorHub? That's the only place I can imagine 1.1.x being available.

I would suggest upgrading to a more recent version. If you're using OLM and it's not letting you skip versions, you should be able to uninstall hive entirely (the CRDs and therefore CRs like your ClusterDeployments ought to remain unaffected) and reinstall it e.g. from the tip of the alpha channel in OperatorHub.
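For reference, the uninstall path described above can be sketched with kubectl. The namespace and object names below are assumptions for illustration; list your own install first with `kubectl get subscription,csv -n <namespace>`:

```shell
# Hedged sketch of the OLM uninstall path: delete the Subscription and CSV.
# "hive" namespace and "hive-operator*" names are assumptions; adjust to
# whatever `kubectl get subscription,csv -n <namespace>` shows on your cluster.
kubectl -n hive delete subscription hive-operator
kubectl -n hive delete clusterserviceversion hive-operator.v1.1.5

# OLM does not delete CRDs on uninstall, so CRs such as ClusterDeployments
# remain in place and a reinstall picks them back up:
kubectl get crd | grep hive.openshift.io
```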

@groenator
Author

Hi @2uasimojo

Thank you for your reply.

I managed to upgrade the hive operator successfully, and everything is back to normal now. I did exactly what you suggested above.

I thought at first that deleting the Subscription and the CSV would delete the CRDs as well. It didn't, which is really good :)

I will close this issue.

Regards, Bogdan

@groenator groenator reopened this Aug 8, 2022
@groenator
Author

groenator commented Aug 8, 2022

Hi,

I forgot to ask you one thing.

The ClusterDeployment is now showing FailedToStart in the power state because it cannot validate the AWS credentials. I purposely added some dummy credentials in the past when I was using the old version.

What would be the impact if I leave the status like that for a while?

The AWS credentials are only used to provision resources in AWS, right? They won't affect hive's ability to manage or apply configurations on the clusters?

Regards,

@2uasimojo
Member

> I thought at first that deleting the Subscription and the CSV would delete the CRDs as well. It didn't, which is really good :)

Agree. There was at one point a feature request in OLM to support (optional) removal of CRs and CRDs, but I don't think it ever took off.

> The ClusterDeployment is now showing FailedToStart in the power state because it cannot validate the AWS credentials.

Hive supports hibernation/resume by stopping/starting cloud instances. I'm not sure exactly when this feature was introduced, but I wouldn't be surprised if it was waaay after 1.1.5. So what's happening right now is the controller responsible for the cluster's power state is trying to figure out what that state is for the first time, and since it can't connect to the cloud, it's assuming the instances are down. If they're really not, and you're able to access the console/API and do work, then you should be fine to leave it as is. However, it would probably behoove you to fix those creds so you can take advantage of this feature.
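To watch what the power-state controller currently reports, something like the following can help. The `.status.powerState` field path is an assumption based on recent hive versions; verify it against your CRD schema:

```shell
# Hedged sketch: show the power state hive reports per ClusterDeployment.
# Assumes a hive version that populates .status.powerState on the CD status.
kubectl get clusterdeployment --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,POWERSTATE:.status.powerState'
```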

@groenator
Author

Hi,

I am in the process of migrating all the ClusterDeployments from EKS to a new OpenShift cluster, so because of this migration I'd rather not fix the AWS creds now and just do it once the CDs are migrated over.

I can access the console and API fine, the clusters are operating without any issues.

Once again, thanks for your input. I will close this issue for now.

Regards,
