Deploy mctc to the cluster via our internal ArgoCD instance #14
@david-martin I have some questions regarding the deployment of mctc in our HCG cluster:
I suspect the reason for this is simplicity of setup and reuse of the argocd secrets.
Yes, for simplicity and for LE (Let's Encrypt) account reuse (the rate limits are tied to the account).
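For context, if the LE certificates are managed through cert-manager (an assumption on my part; the issuer name, email, and secret name below are hypothetical), account reuse comes down to pointing every environment at the same ACME account key secret:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production   # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com     # hypothetical contact address
    # Reuse the pre-created account key instead of registering a new
    # account, so the LE rate limits stay tied to the one shared account.
    disableAccountKeyGeneration: true
    privateKeySecretRef:
      name: le-account-key       # hypothetical shared secret name
    solvers:
      - http01:
          ingress:
            class: nginx
```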
@david-martin @maleck13 @pmccarthy I have been progressing with the deployment of the unstable environment to the HCG cluster and there's already an initial setup. It does not work yet as there are still things to fix, but I wanted to share a couple of thoughts about problems we will have if we run both the stable and unstable instances of HCG in the same cluster:
- CRDs are cluster-scoped, so both instances would be forced to share the exact same CRD versions.
- Both controllers would watch the same cluster secrets, so they would overlap with each other.
We could just deploy unstable for the time being and decide later what to do, but to me the shared CRD item especially calls for having one cluster per environment.
+1 to this.
The controller could be changed to watch for secrets in specific namespaces so there is no overlap.
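One common pattern for that (a sketch only; mctc does not currently expose this, and the env var follows the usual operator convention, so treat everything here as hypothetical) is to pass the namespace to watch into the controller Deployment and have the manager scope its cache to it:

```yaml
# Hypothetical: scope the controller to a single namespace via an env
# var, following the common WATCH_NAMESPACE operator convention.
# mctc would need a code change to honour this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mctc-controller
  namespace: mctc-unstable
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mctc-controller
  template:
    metadata:
      labels:
        app: mctc-controller
    spec:
      containers:
        - name: controller
          image: quay.io/example/mctc:unstable   # hypothetical image ref
          env:
            - name: WATCH_NAMESPACE
              value: mctc-unstable
```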
@roivaz Agree on the items you outlined. I think a second cluster makes sense, but for now we can just deploy unstable as you suggest.
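For reference, a minimal Argo CD Application for the unstable instance could look like the sketch below (the repo URL, path, and namespaces are made up for illustration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mctc-unstable
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/mctc-deployments   # hypothetical repo
    targetRevision: main
    path: environments/unstable
  destination:
    # Deploy into the same cluster Argo CD runs in.
    server: https://kubernetes.default.svc
    namespace: mctc-unstable
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```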
@maleck13 @david-martin I'm reviewing the errors I get in the mctc controller logs and there is a lot of noise in there caused by the kcp-glbc envs: there are secrets in the cluster pointing to those kcp api endpoints and mctc is trying to pick them up and failing miserably (or at least I think that's the problem ...). This might be a good time to do some cleanup; it would help me debug what's going on and get the mctc-unstable env to a healthier status. Any reason not to undeploy the kcp-glbc envs and do some cleanup of other test namespaces (some also contain cluster secrets)? I would also suggest removing ACM if we are not using it right now, as it also deploys some cluster secrets and other resources. We can always bring back anything we need in the future.
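For context, the cluster secrets in question follow Argo CD's documented cluster secret format, roughly like this (the name, endpoint, and token below are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kcp-unstable-cluster            # hypothetical name
  namespace: argocd
  labels:
    # Argo CD (and anything reusing its secrets) discovers clusters
    # by this label.
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: kcp-unstable
  server: https://kcp.example.com:6443  # a stale endpoint like the ones causing the log noise
  config: |
    {
      "bearerToken": "<redacted>"
    }
```

Any secret like this that points at a dead kcp api endpoint would explain the connection errors in the controller logs.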
No. That makes sense.
Agreed. It has served its purpose already.
Cleaned up ACM, the kcp-glbc envs and some empty demo namespaces. I haven't deleted the kcp-* namespaces in case any cleanup is required first (or also) in the kcp api servers.
TODO: