Restoring deployment config with history leads to weird state #20729
Comments
@smarterclayton I believe this is broken since 3.7 (haven't confirmed that).
/cc @openshift/sig-master
The fact that we can't roll out immediately after adoption is a DC bug - how the owner references are exported is more for the CLI, and possibly related to new plans w.r.t. export
Yeah, there should be a way to remove the UUID inside `ownerRefs`. I would expect this will also affect builds (@bparees).
Technically, you need to remove the whole `ownerRef`.
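
A quick way to see why the restored RCs get collected (a sketch; assumes a DC named `test` whose old RCs were recreated from a backup):

```sh
# UID of the newly created DeploymentConfig
oc get dc test -o jsonpath='{.metadata.uid}'; echo

# UID that a restored RC still carries in its ownerReference
oc get rc test-1 -o jsonpath='{.metadata.ownerReferences[0].uid}'; echo

# If the two UIDs differ, the garbage collector treats the RC as a dependent
# of an owner that no longer exists and deletes it.
```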
To be clear, there seem to be two issues:
1. the exported `ownerRefs` point at the old DC's UID, so the restored RCs get garbage-collected instead of adopted
2. even after adoption, the DC can't be rolled out immediately because its revision isn't bumped to match the active RC
I think @deads2k would tell us this is an import problem (export should export everything it knows; import needs to be more judicious about what gets stripped/mutated/cleaned during import). But yeah, I would guess builds and also various service broker objects (including TemplateInstances, I think) would be affected by this... pretty much anything with an ownerRef, no?
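
For a sense of the blast radius, one way to list objects that still carry an ownerRef (a sketch; the resource list is illustrative, and `jq` is assumed to be installed - it is not part of oc):

```sh
# List every object of these kinds in the current project that has an ownerReference set.
oc get dc,rc,builds,templateinstances -o json \
  | jq -r '.items[] | select(.metadata.ownerReferences != null) | "\(.kind)/\(.metadata.name)"'
```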
In this case "import" is `oc create`.
Yeah, I understand; the point is that import needs to exist because that is where such logic would need to live (or some secondary tooling that you run to process your exported resources before you send them to `oc create`).
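
A minimal sketch of such a pre-processing step (assumes the backup is saved as a JSON List and that `jq` is installed; neither is part of oc):

```sh
# Take the backup as JSON so it is easy to post-process.
oc get dc,rc -o json > backup.json

# Strip the stale ownerReferences (and status, which is not needed on create)
# before feeding the objects back to oc create.
jq 'del(.items[].metadata.ownerReferences) | del(.items[].status)' backup.json \
  | oc create -f -
```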
I like
Nope,
Nada!
+1 to @soltysh's responses. Overloading create doesn't seem like the right answer here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Consider this scenario:
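
Roughly the following (the DC name, image, and exact commands here are illustrative assumptions, not the reporter's original steps):

```sh
# Create a DC and roll it out a couple more times so it has history (3 revisions).
oc new-app --name=test --docker-image=openshift/hello-openshift
oc rollout status dc/test
oc rollout latest dc/test && oc rollout status dc/test
oc rollout latest dc/test && oc rollout status dc/test

# Back up the DC together with its replication controllers
# (the ownerReferences on the RCs are kept in the dump).
oc get dc,rc -o yaml > backup.yaml

# Delete everything, then try to restore it from the backup.
oc delete dc/test
oc create -f backup.yaml
```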
What happens here is basically that all the replication controllers are deleted right after they are created, and then a new RC is created with `revision=1`. This is because the replication controllers have `ownerRefs` set to the DC which was deleted, and the UUID does not match the newly created DC.

If you edit `backup.yaml`, remove all `ownerRef` fields from the RCs and recreate everything, then the 3 RCs will stay, but the `revision` for the DC is set to `1` instead of `3`. My guess is that the adoption is broken, or we simply forget to bump the revision to match the currently active RC...

That means, when you do `oc rollout latest test`, it will tell you that it successfully rolled out, but nothing will happen (just the DC revision is bumped) until you call that command three times. On the fourth try, it will actually trigger a new rollout. My guess here is that the controller fails to create the RC because it already exists, and then on reconcile it sees that the RC is there and bumps the revision.

* #20728
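
One way to see the revision mismatch described above (a sketch; assumes the DC from the scenario is named `test` and its restored RCs follow the usual `test-1`..`test-3` naming):

```sh
# What the DC thinks the latest revision is (1 after the restore):
oc get dc test -o jsonpath='{.status.latestVersion}'; echo

# What the newest restored RC claims it was deployed from (3):
oc get rc test-3 \
  -o jsonpath='{.metadata.annotations.openshift\.io/deployment-config\.latest-version}'; echo
```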