OCPBUGS-29003: move azure storage blobs from docker back into /docker #998
Conversation
@flavianmissi: This pull request references Jira Issue OCPBUGS-29003, which is valid. 3 validations were run on this bug.
Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed 8da416d to 0542d18 (compare)
@flavianmissi: An error was encountered querying GitHub for users with the public email (wewang@redhat.com) for bug OCPBUGS-29003 on the Jira server at https://issues.redhat.com/. No known errors were detected; please see the full error message for details:
Post "http://ghproxy/graphql": dial tcp 172.30.229.2:80: connect: connection refused
Please contact an administrator to resolve this issue, then request a bug refresh.
azure tests are failing due to:
I got this error during my tests; the reason was that the image used by the controller to launch the job container was not the same as the image I had built. I need to investigate whether the same thing is happening in the test.
I don't know why yet, but the image used by the job pod (fetched from an env var) is not the same as the image used by the operator pod. We want them to match.
Looks like the IMAGE env var is set to the image-registry image. I'm looking into the best way to use the operator image for the job instead.
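The fix being discussed amounts to choosing the job pod's image from a different env var than the generic `IMAGE` one. A minimal sketch of that selection logic — the variable name `OPERATOR_IMAGE` is an assumption for illustration, not necessarily what the operator manifests export:

```python
import os

def job_image(environ=os.environ):
    """Pick the image for the path-fix job container.

    Prefer a dedicated operator-image variable over the generic IMAGE,
    which (per the discussion above) points at the image-registry image
    rather than the operator image. OPERATOR_IMAGE is a hypothetical name.
    Returns None when neither variable is set.
    """
    return environ.get("OPERATOR_IMAGE") or environ.get("IMAGE")
```

With a mapping like `{"IMAGE": "registry-img", "OPERATOR_IMAGE": "operator-img"}` this returns `"operator-img"`, so the job and operator pods run the same image.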
Force-pushed 762da36 to 2fc4238 (compare)
Commits:
* Dockerfile: add move-blobs binary to image
* cmd/move-blobs: prioritise account key for auth when present (this fixes the path bug on the storage level by migrating blobs from the `docker` virtual directory into the `/docker` virtual directory instead)
* azurepathfix: give job required env vars
* handle different job conditions on azure path fix controller
* azurepathfixjob: set restart policy to never
* azurepathfixjob: export account key from secret
* azurepathfixjob: add azure account key env var to job container
* pass account key to path fix job (this time for reals)
* azurepathfix: only allow new controller to run on Azure
* also mount trusted-ca volume to job so the script works when a cluster-wide proxy is in use
* fix a couple of misspellings
* manifests: export operator image env var to container
* manifests: update generated files
* mount volumes for proxy certs on azure path fix job
* azurepathfixcontroller: return error if account name is not present
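Taken together, the commits above imply a Job spec roughly like the following. This is an illustrative sketch, not the generated manifest — the container name, secret name, key names, and mount paths are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: azure-path-fix                # job name referenced later in the PR description
  namespace: openshift-image-registry
spec:
  template:
    spec:
      restartPolicy: Never            # "set restart policy to never"
      containers:
      - name: azure-path-fix
        image: $(OPERATOR_IMAGE)      # operator image, not the image-registry image
        command: ["/usr/bin/move-blobs"]
        env:
        - name: AZURE_ACCOUNTKEY      # "add azure account key env var to job container"
          valueFrom:
            secretKeyRef:             # "export account key from secret"
              name: image-registry-private-configuration   # assumed secret name
              key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY       # assumed key name
        volumeMounts:
        - name: trusted-ca            # so the job works behind a cluster-wide proxy
          mountPath: /etc/pki/ca-trust/extracted/pem
      volumes:
      - name: trusted-ca
        configMap:
          name: trusted-ca
```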
Force-pushed 2fc4238 to 234698e (compare)
/hold cancel
/cherry-pick release-4.15
@flavianmissi: once the present PR merges, I will cherry-pick it on top of release-4.15 in a new PR and assign it to you.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dmage, flavianmissi. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
/hold cancel
/retest
@flavianmissi: all tests passed! Full PR test history. Your PR dashboard.
/hold cancel
@flavianmissi: Jira Issue OCPBUGS-29003: some pull requests linked via external trackers have merged; the following pull requests linked via external trackers have not merged. These pull requests must merge or be unlinked from the Jira bug in order for it to move to the next state. Once unlinked, request a bug refresh. Jira Issue OCPBUGS-29003 has not been moved to the MODIFIED state.
@flavianmissi: new pull request created: #1000
[ART PR BUILD NOTIFIER] This PR has been included in build ose-cluster-image-registry-operator-container-v4.16.0-202402161440.p0.g9bda27f.assembly.stream.el9 for distgit ose-cluster-image-registry-operator.
This undoes the effects caused by bug OCPBUGS-29003.
It needs to land in tandem with openshift/image-registry#392.
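The core of the migration is a prefix rewrite on blob names: blobs that ended up under the `docker` virtual directory are moved back under `/docker`. A minimal sketch of that rename rule as a pure function — a hypothetical helper, since the actual `move-blobs` command works against the Azure storage API:

```python
def fixed_blob_name(name: str) -> str:
    """Rewrite a blob name from the 'docker' virtual directory to '/docker'.

    Blobs already under '/docker' are returned unchanged, and blobs outside
    either directory are left alone. (Illustrative sketch of the rename rule,
    not the operator's actual implementation.)
    """
    if name.startswith("/docker/"):
        return name
    if name.startswith("docker/"):
        return "/" + name
    return name
```

For example, `fixed_blob_name("docker/registry/v2/blobs/sha256/ab/abc")` yields `"/docker/registry/v2/blobs/sha256/ab/abc"`, while names already under `/docker` pass through untouched.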
Testing
1. On a 4.13.z cluster, push a few images to the internal registry.
2. (Optional) Start some deployments from the pushed images.
3. Upgrade to 4.14.z (NOTE: use a 4.14.z release that does not contain the bug fix). Pulling will no longer work for images pushed on 4.13.z.
4. Push new images. To exercise more of the code, push both images that were not pushed on 4.13.z and some images that were already pushed on 4.13.z (NOTE: make sure you still have images pushed on 4.13.z that cannot be pulled at this point).
5. Update the cluster to a 4.14.z version that contains the fix. If you use ClusterBot to create a release build, you should be able to upgrade to that build using `oc adm upgrade --to-image=` (I have not tried this).
6. While the migration job is running, the operator will report "Progressing: true". Once the migration job finishes successfully, the operator should become available again.
7. Repeat the steps above with a cluster-wide proxy.
8. If you know other registry configurations that could affect the migration job, test them too.
9. Deploy a cluster containing the fix on a different cloud. The azure-path-fix migration job should not be created, and the operator should become available as it normally would.
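The "Progressing: true" behavior during the migration can be sketched as a small mapping from the job's conditions to the operator's reported state. This is hypothetical logic — the controller in this PR handles more cases ("handle different job conditions"), and the Degraded-on-failure branch is an assumption:

```python
def operator_state(job_complete: bool, job_failed: bool) -> dict:
    """Map azure-path-fix Job conditions to operator conditions.

    While the job is still running the operator reports Progressing; on
    success it becomes Available again. Treating a failed job as Degraded
    is an assumed behavior, not confirmed by the PR text.
    """
    if job_failed:
        return {"Available": False, "Progressing": False, "Degraded": True}
    if job_complete:
        return {"Available": True, "Progressing": False, "Degraded": False}
    return {"Available": False, "Progressing": True, "Degraded": False}
```

While neither condition is set the operator stays `Progressing: true`, matching what the testing steps say to expect during the migration.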