This repository has been archived by the owner on Nov 30, 2021. It is now read-only.
Upgrade Issue with bringing in kube-registry-proxy #766
Comments
Let's see if we can distill this into a base case from which we can hopefully ship a PR and a functional test upstream to Helm.
It is possible that this is due to a k8s regression (been running
Adding this to the v2.15 milestone. We'll want to retry this on a v1.6.x cluster. As it stands, we've added
This issue was moved to teamhephy/workflow#27
In order to switch over from our in-house registry-proxy to the official/upstream `kube-registry-proxy` (as the original PR #734 proposed), we will need to sort out the following issue when upgrading.

v2.12.0 release candidate testing showed that after a Workflow install that uses the in-house variant of `deis-registry-proxy` (say, v2.11.0), when one goes to upgrade (`helm upgrade luminous-hummingbird workflow-staging/workflow --version v2.12.0`), although the `deis-registry-proxy` pod appears to have been removed, the new `luminous-hummingbird-kube-registry-proxy` pod sometimes does not appear due to a host port conflict:
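For context, a host port conflict of this kind typically arises when both the old and new proxy pods request the same `hostPort` on a node: if a pod from the outgoing release still holds the port, the scheduler cannot place the replacement pod there and it stays `Pending`. A minimal sketch of the relevant spec fragment (the image tag, names, and port numbers here are illustrative assumptions, not taken from either chart):

```yaml
# Hypothetical excerpt of a kube-registry-proxy DaemonSet pod spec.
# The conflict occurs on whichever hostPort both the in-house and
# upstream proxies bind; 5555 below is only an example.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-registry-proxy
spec:
  selector:
    matchLabels:
      app: kube-registry-proxy
  template:
    metadata:
      labels:
        app: kube-registry-proxy
    spec:
      containers:
        - name: proxy
          image: gcr.io/google_containers/kube-registry-proxy:0.4  # illustrative
          ports:
            - containerPort: 80
              hostPort: 5555  # if the old pod still holds this port, the new pod stays Pending
```

Because the port is claimed per node, the symptom is intermittent: it depends on whether the old pod has actually been torn down on a given node before the new one is scheduled there.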