Make operator workable on disconnected clusters #1234
Conversation
This is great @muellerfabi, this is something that has been requested for a long time.
I'm not too sure about updating the default grafana image. Yes, digests are best from a security point of view, but they also lower readability by a lot.
Since you can overwrite the grafana image on your own, I don't see this as a huge issue in a disconnected env.
We do get a bit of a chicken-and-egg issue though: we normally prepare a manual PR that contains the updated bundle/manifests file before creating the tag. When the tag is created, the container image is built and published.
We could change our way of working into cutting the release first and then updating the docs, but it would look a bit strange in the commit history.
I'm leaning towards the second way: building the image locally, getting the digest, and adding it to the bundle/manifests file manually before creating the tag.
Could you document how to do this in https://github.com/grafana-operator/grafana-operator/blob/master/PREPARE_RELEASE.md?
Container question that I'm way too lazy to test myself: how does it work with different build CPU architectures? Does that give the same digest?
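As a rough sketch of the "build locally, grab the digest" step: `docker push` prints the manifest digest of what it just uploaded, and that is the value to paste into bundle/manifests before tagging. The push log below is sample text (the digest is the one that appears later in this PR's bundle diff), so the extraction can be shown without a live registry. Note that a locally rebuilt image generally gets a *different* digest than a CI build, so the digest must be taken from the image that is actually pushed.

```shell
# Sketch of the proposed flow: after `docker push`, the client prints the
# manifest digest. For a multi-arch image this is the digest of the manifest
# list, so it is the same regardless of which architecture later pulls it.
# The log line below is sample text standing in for a real push.
push_log='latest: digest: sha256:54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4 size: 1363'

# Extract the digest to paste into bundle/manifests before creating the tag.
digest=$(printf '%s' "$push_log" | sed -n 's/.*\(sha256:[0-9a-f]\{64\}\).*/\1/p')
echo "$digest"
```

This also partly answers the architecture question: for a multi-arch image, the published digest pins the manifest list that covers all architectures, not a single per-arch image.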
I need to do some refactoring because
Alright, I added another target in the Makefile. Cannot tell if it is a good idea to keep it as it is, i.e. keeping the CSV with image tags in master like before, but running
The image digest should contain all available architectures, because:
...
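To make the "all available architectures" point concrete: a multi-arch image is published as one OCI image index (a.k.a. manifest list) that references one manifest per architecture, and the pinned digest refers to that index. The toy index below uses made-up per-arch digests; only the structure matters.

```shell
# A miniature OCI image index, as a multi-arch build would publish it.
# The per-arch digests here are made up; only the structure is real.
index='{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {"digest": "sha256:aaaa...", "platform": {"architecture": "amd64"}},
    {"digest": "sha256:bbbb...", "platform": {"architecture": "arm64"}}
  ]
}'

# Each architecture has its own manifest digest, but they all hang off a
# single index; pinning the index digest in the CSV therefore serves
# every architecture at once.
printf '%s\n' "$index" | grep -c '"architecture"'
```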
Sorry for the delay in review.
I have added a few comments; I also have a general question.
I can't decide either on how we should manage the tag vs the OLM files.
@grafana-operator/maintainers, what do you think?
Should we build the image locally to get the digest and then upload it before releasing the tag?
Or should we create the tag and do a separate PR afterwards where we fix the digest and everything OLM related?
In reality, we don't even need to patch the bundle/manifests file in this repo. But at the same time, it feels good to keep it in sync with the config that we have in the OLM repo.
This is quite the conundrum. On one hand, I don't like the idea of having our repo-based OLM files out of sync for a release; on the other hand, I don't think we really expect users to use our manifests directly, right? Most of the manifest-related use cases are for installation through OLM/OperatorHub. So, my proposal essentially is:

Yes, I'm leaning towards that as well. We could even create a GitHub Action that automatically runs after the image tag has been generated and creates a PR to both repos.
@muellerfabi we had an internal discussion. But if you can take a look at the comments that I made, that would be great.
So I did a few changes to … Isn't all this config generated when running …? It would also be nice to run …
Created #1261 to automate the OLM release process after this PR gets merged.
Not sure if
Sorry for my very slow reply, I currently have lots of things to do.
First of all, great job with the PR.
The only issue I can find is that when I run `make bundle/redhat`, the output shows me the diff below.
My guess is that `manager` is coming from config/default/manager_config_patch.yaml or something like that. Not that it really matters, but it looks ugly ;).
Is there some easy way to get rid of it?
```diff
+ - image: docker.io/grafana/grafana@sha256:ff68ed4324e471ffa269aa5308cdcf12276ef2d5a660daea95db9d629a32a7d8
+   name: grafana
+ - image: ghcr.io/grafana-operator/grafana-operator@sha256:54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4
+   name: manager
+ - image: ghcr.io/grafana-operator/grafana-operator@sha256:54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4
+   name: grafana-operator-54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4-annotation
```
Also, please run `make bundle/redhat` one time, so when we do it locally in the future the diff isn't that big. When I do a release, I normally look at the diffs and see how they look. In general, it's only the image and the date that change, more or less.

I am afraid you probably have to accept the noisy output; as far as I understand, the pullspec module does not accept `-q`. The duplicate image entry was related to the containerImage annotation in the CSV. I think it is not required anyway. While trying to figure out why there are duplicate entries under relatedImages, I stumbled over these things (that are not 100% related to this PR):
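For context, if the duplicate really does come from the containerImage annotation, this is the CSV field in question. A hypothetical sketch of its shape (the digest is copied from the diff above), not necessarily the exact layout in this repo:

```yaml
# Hypothetical sketch: the CSV annotation suspected of generating the extra
# relatedImages entry. Dropping it should leave only the pinned entries.
metadata:
  annotations:
    containerImage: ghcr.io/grafana-operator/grafana-operator@sha256:54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4
```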
@muellerfabi really nice. @pb82 will give this a try inside OLM, and other than that I think we are getting ready to merge it :) About your questions that are not related to the PR:
If I remember correctly, we use the one with descriptions when we generate the API.md file, which is the base for our CRD docs.
My guess is that it's part of the default yaml when you create an operator using operator-sdk. We just didn't think of it, so we can probably remove it.
Probably same as above, part of the default, and I don't think we ship that service by default. But to be honest, I don't know.
As a side note, ImageContentSourcePolicy is deprecated in 4.13 and will be replaced by ImageDigestMirrorSet and ImageTagMirrorSet. It seems ImageTagMirrorSet would be able to handle the main problem?
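For readers landing here later, the 4.13+ replacement looks roughly like this (the mirror hostname is a placeholder); tag-based mirroring would use `kind: ImageTagMirrorSet` with an `imageTagMirrors` list instead:

```yaml
# Sketch of the ImageDigestMirrorSet replacement (OpenShift 4.13+).
# mirror.example.com is a placeholder registry.
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: grafana-operator-mirror
spec:
  imageDigestMirrors:
    - source: ghcr.io/grafana-operator/grafana-operator
      mirrors:
        - mirror.example.com/grafana-operator/grafana-operator
```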
What is the status? Any changes required from my side?
That is true, the need to use digests in disconnected environments is now softened. Yet in enterprise environments there is still a desire to have a reasonable image.
That is a good question. Life is of course easier if we don't have to create digests, but as you say, they are never bad to have. So I guess let's keep it as is. Please update the PR, and we can see if some of the Red Hat maintainers have time to verify that everything is working with the new OLM config. If so, we should be okay to merge as I see it.
This looks good to me, thanks @muellerfabi.
The generated bundle no longer contains a version, but a sha.
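Concretely, the change in the generated bundle is the image reference format. The tag in the "before" line below is hypothetical; the digest is the one from this PR's diff:

```yaml
# before: pinned by tag (hypothetical tag, for illustration only)
#   image: ghcr.io/grafana-operator/grafana-operator:v5.x.y
# after: pinned by immutable digest
image: ghcr.io/grafana-operator/grafana-operator@sha256:54784cb7c79a70740ed48052561c567af84af21f11a6d258060f4c2689a934c4
```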
@muellerfabi @NissesSenap we now have two Makefile targets to create a bundle (
LGTM, @pb82 will do a final verification, then we should be able to merge this.
In order to make the operator usable on disconnected clusters, it is required to use the image digest instead of a tag. [1]
Why? OpenShift uses a so-called ImageContentSourcePolicy object that "translate[s] between the image references stored in Operator manifests and the mirrored registry." [2] This only works with image digests, not tags.
The lines I added in the Makefile are from an empty operator-sdk skeleton.
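For illustration, an ImageContentSourcePolicy of the kind [2] describes looks roughly like this (the mirror registry is a placeholder). Because the mirroring is only applied to pulls by digest, the references in the operator manifests must be digests:

```yaml
# Sketch of an ImageContentSourcePolicy (deprecated in 4.13 in favour of
# ImageDigestMirrorSet). mirror.example.com is a placeholder registry.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: grafana-operator-mirror
spec:
  repositoryDigestMirrors:
    - source: ghcr.io/grafana-operator/grafana-operator
      mirrors:
        - mirror.example.com/grafana-operator/grafana-operator
```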
[1] https://docs.openshift.com/container-platform/4.12/operators/admin/olm-restricted-networks.html
[2] https://docs.openshift.com/container-platform/4.12/installing/disconnected_install/installing-mirroring-installation-images.html#olm-mirror-catalog-manifests_installing-mirroring-installation-images