[helm] Use variable for release name in secrets deployment #4010
Conversation
Force-pushed from b6cf3d4 to ff0c704
Force-pushed from ff0c704 to 1e718b6
@@ -0,0 +1,31 @@
# Copyright 2020 The Magma Authors.
For helm 3, requirements got moved into the Chart.yaml file, so there's no need for this. You should see the dependencies in Chart.yaml if you rebase. Also, while you're at it, can you rename requirements.lock to Chart.lock, in line with helm 3 expectations? See: https://helm.sh/blog/helm-3-preview-pt5/
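As a sketch of what the comment describes, a Helm 3 Chart.yaml carries the dependency list that requirements.yaml held in Helm 2 (chart names, versions, and the repository path below are illustrative placeholders, not the actual orc8r chart contents):

```yaml
# Chart.yaml (Helm 3): dependencies live here, replacing requirements.yaml.
# Names/versions below are placeholders for illustration only.
apiVersion: v2
name: orc8r
version: 0.1.0
dependencies:
  - name: secrets
    version: 0.1.0
    repository: "file://../secrets"
```

Running `helm dependency update` against a chart like this writes a Chart.lock file, which is why the old requirements.lock should be renamed.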
mpgermano left a comment
The secrets refactor seems good to me, but I don't think we should be using a value to set the release name.
{{- define "orchestrator-config-template" -}}
useGRPCExporter: true
prometheusGRPCPushAddress: "{{ .Values.fullReleaseName }}-prometheus-cache:9092"
For all of these I think we should use {{ .Release.Name }} instead and remove the fullReleaseName value.
I think it's fair to assume that the release name will be the same for both the secrets chart and the main orc8r release. There's greater risk that the release name for both is not orc8r but the value doesn't get updated, leaving us in a bad state.
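Applying the suggestion to the template quoted above, the address would be built from Helm's built-in release object instead of a chart value (a sketch based on the snippet in this thread, not the final diff):

```yaml
{{- define "orchestrator-config-template" -}}
useGRPCExporter: true
# .Release.Name is supplied by `helm install <name> ...`,
# so no fullReleaseName value needs to be kept in sync.
prometheusGRPCPushAddress: "{{ .Release.Name }}-prometheus-cache:9092"
{{- end -}}
```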
Ah ok. I was basing this off of how we deploy on minikube, where the secrets chart is deployed with the release name orc8r-secrets. Will change it, though, since that's not the normal setup.
Force-pushed from 1e718b6 to bfa7df4
Signed-off-by: Scott8440 <scott8440@gmail.com>
Force-pushed from bfa7df4 to 39b4d15
Looks good, just remove the remaining value in orc8r. If a release name mismatch ever causes issues, we should just remove the release name prefix for the metrics services and use a hardcoded orc8r- prefix. This is a pattern we're using for orc8r services now, as it helps standardize things. It seems unlikely to me that we're going to need to support multiple orc8r releases in the same namespace.
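For illustration, the hardcoded-prefix fallback mentioned above would reduce the same template line to a fixed service name (a sketch, not part of this PR):

```yaml
# Hardcoded orc8r- prefix: independent of the release name entirely.
prometheusGRPCPushAddress: "orc8r-prometheus-cache:9092"
```

This trades flexibility (multiple releases per namespace) for the guarantee that the address never drifts out of sync with the release name.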
# secrets sub-chart configuration.
secrets:
  create: false
  fullReleaseName: orc8r
Not needed anymore
Signed-off-by: Scott8440 <scott8440@gmail.com>
Summary

The secrets deployment was using hardcoded orc8r as the prefix for service names, and calls to {{ .Release.Name }} would use the release name of the secrets deployment, which could lead to incorrect addresses. This didn't happen in practice because non-default deployment names are rare, but fixing it here prevents it from becoming an issue.

Introduce a variable fullReleaseName to the secrets chart and build the yml config files off of it.

Test Plan