
WIP Add extra manifests during 06_deploy_bootstrap_vm.sh #127

Closed · hardys wants to merge 1 commit

Conversation


@hardys hardys commented Mar 7, 2019

We add a `create manifests` step before generating the ignition configs, because the assets in kni-installer will read all files in that location, so we can easily patch in the extra manifests needed.

In the future it may be cleaner to add the manifest assets directly to kni-installer, but for now this should be a simple way to iterate on testing the manifests.

Note that the index prefix was changed to align with the existing manifests generated by kni-installer; I'm assuming a larger index here will result in the desired serialization, so these are applied after the openshift-generated manifests. Testing is needed to confirm this.
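For context, the approach amounts to roughly the following (a minimal sketch; the installer invocation, directory layout and manifest glob are assumptions, not the exact diff in this PR):

```bash
# Generate the manifest assets first, before the ignition configs exist.
openshift-install --dir ocp create manifests

# Drop the extra manifests alongside the generated ones. kni-installer reads
# every file in this directory, and the numeric prefix only controls ordering,
# so a larger index should serialize these after the existing 99_* manifests.
cp extra_manifests/*_cnv_*.yaml ocp/manifests/

# Now generate the ignition configs, which bake the manifests into the
# bootstrap ignition.
openshift-install --dir ocp create ignition-configs
```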

```
done

These manifests will be applied by the installer in order, based on the
prefix, but not that currently kni-installer uses prefixes of 99 for most
```

@hardys (Author) commented on this hunk: typo s/not/note

@derekhiggins
Collaborator

Build SUCCESS, see build http://10.8.144.11:8080/job/dev-tools/133/

@derekhiggins
Collaborator

Build SUCCESS, see build http://10.8.144.11:8080/job/dev-tools/135/

@hardys
Author

hardys commented Mar 8, 2019

Ok, so this appears to apply the manifests (I see the kubevirt CRD), but kubevirt doesn't yet come up. From talking with @fabiand, it sounds like we're blocked on some testing of kubevirt on OCP4 on other platforms, so we should probably revisit debugging when that is completed and metalkube can deploy workers.

Probably no harm in deploying with these manifests in the meantime though, provided the deploy still works and it doesn't add much to the overall wall time.
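(A quick way to sanity-check that, sketched with an assumed kubeconfig path:)

```bash
# Illustrative checks, assuming the generated kubeconfig lives under ocp/auth/:
# the CRD existing shows the extra manifests were applied, while the pod list
# shows whether the operator actually came up in the kubevirt namespace.
oc --config ocp/auth/kubeconfig get crd kubevirts.kubevirt.io
oc --config ocp/auth/kubeconfig get pods -n kubevirt
```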

@booxter
Contributor

booxter commented Mar 8, 2019

@hardys I don't think it actually works. Yes, I see the kubevirt CRD, but then it fails as follows:

```
[root@zeus07 manifests]# oc --config ../ocp/auth/kubeconfig apply -f 110_cnv_kubevirt_op.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator configured
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"labels\":{\"kubevirt.io\":\"\"},\"name\":\"kubevirt\",\"namespace\":\"\"}}\n"},"namespace":""}}
to:
Resource: "/v1, Resource=namespaces", GroupVersionKind: "/v1, Kind=Namespace"
Name: "kubevirt", Namespace: ""
Object: &{map["kind":"Namespace" "apiVersion":"v1" "metadata":map["name":"kubevirt" "selfLink":"/api/v1/namespaces/kubevirt" "uid":"cfc4fdce-41bb-11e9-a9c0-52fdfc072182" "resourceVersion":"904" "creationTimestamp":"2019-03-08T16:04:11Z" "labels":map["kubevirt.io":""]] "spec":map["finalizers":["kubernetes"]] "status":map["phase":"Active"]]}
for: "110_cnv_kubevirt_op.yaml": namespaces "kubevirt" is forbidden: caches not synchronized
Error from server (Forbidden): error when creating "110_cnv_kubevirt_op.yaml": serviceaccounts "kubevirt-operator" is forbidden: caches not synchronized
Error from server (Forbidden): error when creating "110_cnv_kubevirt_op.yaml": deployments.apps "virt-operator" is forbidden: caches not synchronized
```

This is weird because I was able to bring up kubevirt several days ago.

@booxter
Contributor

booxter commented Mar 8, 2019

@hardys I see the following errors in bootkube.service logs too:

```
Mar 08 19:46:25 localhost.localdomain openshift.sh[3479]: Error from server (Forbidden): error when creating "./111_cnv_kubevirt_cr.yaml": kubevirts.kubevirt.io "kubevirt" is forbidden: caches not synchronized
Mar 08 19:46:25 localhost.localdomain openshift.sh[3479]: kubectl create --filename ./111_cnv_kubevirt_cr.yaml failed. Retrying in 5 seconds...
```

This is the same error as when trying to apply the manifests manually.

@booxter
Contributor

booxter commented Mar 8, 2019

I think there is some issue with the kubevirt namespace used for the KubeVirt manifests. When the manifests use that namespace, I get the error above. When they use kube-system instead, they apply just fine (automatically). Perhaps we should temporarily revert to using kube-system for the CNV / KubeVirt services and open an issue to investigate why the cache problem forbids creating legitimate resources in a separate namespace.

@fabiand
Contributor

fabiand commented Mar 11, 2019

Hm, let's not move kubevirt to kube-system, but let's understand why we have the kubevirt namespace issues.

@slintes you are an expert in this area. Thoughts?

@hardys
Author

hardys commented Mar 11, 2019

@vrutkovs perhaps you can help outline the plan as discussed on Slack here. My understanding is that short term we need to replace this approach with a script that runs after the masters are deployed (using the wait logic from #130), but that long term we should be able to correctly serialize the manifests via the installer; I'm still not completely clear on what is needed to make that work?

@hardys hardys changed the title from "Add extra manifests during 06_deploy_bootstrap_vm.sh" to "WIP Add extra manifests during 06_deploy_bootstrap_vm.sh" Mar 11, 2019
@vrutkovs
Collaborator

> long term we should be able to correctly serialize the manifests but I'm still not completely clear on what is needed to make that work?

Yes, according to https://github.com/openshift/installer/blob/master/docs/user/customization.md#kubernetes-customization-unvalidated the installer can tell the CVO to apply additional manifests during/after install, similar to what #137 is doing. In the end it would become part of kni-install.

@hardys
Author

hardys commented Mar 15, 2019

Ok, closing this for now; we'll apply the manifests with a script, then revisit applying them via kni-installer later.
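(For reference, the script-based fallback is essentially this shape; the manifest glob, kubeconfig path and retry policy are assumptions, not the actual dev-scripts change:)

```bash
#!/usr/bin/env bash
# Sketch of applying the extra manifests from a script once the masters are up.
set -euo pipefail

KUBECONFIG_PATH=ocp/auth/kubeconfig

for manifest in extra_manifests/*.yaml; do
    # Retry each apply, since early in the deploy the API server may still
    # refuse requests (e.g. the "caches not synchronized" errors seen above).
    until oc --config "${KUBECONFIG_PATH}" apply -f "${manifest}"; do
        echo "Applying ${manifest} failed, retrying in 5 seconds..."
        sleep 5
    done
done
```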

@hardys hardys closed this Mar 15, 2019
hardys pushed a commit to hardys/dev-scripts that referenced this pull request Mar 15, 2019
We expect this to be handled via kni-installer but there were
issues with that, ref openshift-metal3#127, so let's just apply them via a script
for now, which is probably easier for test/dev workflows anyway.
hardys pushed a commit to hardys/dev-scripts that referenced this pull request Mar 19, 2019
We expect this to be handled via kni-installer but there were
issues with that, ref openshift-metal3#127, so let's just apply them via a script
for now, which is probably easier for test/dev workflows anyway.