[operator] Add Helm chart #204
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##           master     #204   +/-   ##
=======================================
  Coverage   35.46%   35.46%
=======================================
  Files          37       37
  Lines        1844     1844
=======================================
  Hits          654      654
  Misses       1079     1079
  Partials      111      111
```

Continue to review the full report at Codecov.
Force-pushed from 1150e1c to 0f30d4d, then from 0f30d4d to 80e4264.
@krol3 I consider this ready for review at this point! I'm running low on time to do everything I planned (a basic CI test, for example), but based on my local testing it's now in a functional state.
Hi @consideRatio, I tested the Helm chart and it's working. I will close my PR.
```yaml
- name: OPERATOR_TARGET_NAMESPACES
  value: {{ tpl .Values.targetNamespaces . | quote }}
- name: OPERATOR_METRICS_BIND_ADDRESS
  value: ":8080"
```
We could use: {{ print ":" .Values.image.metricsPort | quote }}
I think we can avoid making this configurable, because users will only access it through the k8s Service, which in turn points to the Pod's port named `metrics`.

Hmmm, the k8s Service port can currently be configured with `service.port`, but I think we should rename that to `metricsPort` like you suggest here: it makes it less confusing what the service is currently meant for, and leaves room to expose something other than metrics on another port later.
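A minimal sketch of the Service shape under discussion, assuming the `service.metricsPort` values key from the rename mentioned below, with illustrative names and labels (the chart's real name helpers may differ). The Service port stays configurable, while `targetPort` references the named container port, so the container port itself needs no configuration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: starboard-operator                       # illustrative; charts usually derive this from a name helper
spec:
  selector:
    app.kubernetes.io/name: starboard-operator   # illustrative selector labels
  ports:
    - name: metrics
      port: {{ .Values.service.metricsPort }}
      targetPort: metrics                        # resolves to the container port named "metrics"
```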
Users of another Helm chart I've worked on have been fine without configuring the container/pod port for a very long time, so we opted there not to add it at any point: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/jupyterhub/templates/hub/deployment.yaml#L199-L201
I added 9143ca1 to rename service.port to service.metricsPort - do you agree this is sufficient?
```yaml
# have annotations which will help prometheus as a target for
# scraping of metrics
- name: metrics
  containerPort: 8080
```
Also here: `{{ .Values.image.metricsPort }}`. A readability improvement.
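If both suggestions were applied, the operator's bind address and the container port would derive from the same `image.metricsPort` value (a key assumed from this review thread), keeping them in sync. A sketch:

```yaml
# env entry in the container spec; with metricsPort: 8080 this renders value: ":8080"
- name: OPERATOR_METRICS_BIND_ADDRESS
  value: {{ print ":" .Values.image.metricsPort | quote }}

# ports entry in the same container spec
ports:
  - name: metrics
    containerPort: {{ .Values.image.metricsPort }}
```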
Thank you @krol3 for your review, and for testing that things work on your end as well ❤️ 🎉
Great job @consideRatio! I really appreciate your contribution. I also tested the Helm chart in my cluster and it works as expected.

I left a few comments / questions, and in general I agree with your assumption that the follow-up tasks, such as CI integration and updates to the README, can be done in separate PRs.

Before we merge this one, I'm wondering whether we should add a template to define the VulnerabilityReports custom resource. Some people say that the operator itself should not install the CRDs that it manages, but here we have a Helm chart, which provides a means of installation. Beyond that, I assume that the Helm chart is self-contained, i.e. it does not require the `starboard init` or `kubectl starboard init` command to be run first.

That said, do you think we can somehow symbolically link to https://github.com/aquasecurity/starboard/blob/master/deploy/crd/vulnerabilityreports.crd.yaml and send it to the Kubernetes API along with the other Helm templates? Otherwise we have to assume that a deployer defines the CRDs with the `kubectl create` command.

I also realized that the operator should (programmatically) check whether the CRDs are defined before it spawns any scan job. Otherwise we're wasting resources just to find out that we cannot save a report because of an unknown resource.
Thank you for your review @danielpacak! 🎉 ❤️
Helm 3 supports this, though it's an evolving best practice. I think it's the right call to bundle the CRDs with the Helm chart. They won't be templates that render with values, and in general they won't be managed by Helm after the initial install. For more info, see: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/
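Concretely, Helm 3 installs plain (non-templated) manifests from a top-level crds/ directory before rendering templates, and skips them on upgrade and delete. A sketch of what the chart layout might look like under that convention (paths and file names are assumptions based on the linked CRD manifest):

```
deploy/helm/                             # assumed chart location
├── Chart.yaml
├── values.yaml
├── crds/
│   └── vulnerabilityreports.crd.yaml    # plain manifest, installed before templates
└── templates/
    ├── deployment.yaml
    └── service.yaml
```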
Thanks for adding the CRDs to the chart. I didn't know that was supported by Helm 3! Regarding the caveats, I think we're good. If someone wants managed CRD upgrades, we'd suggest installing with OLM / https://operatorhub.io/operator/starboard-operator anyway.
Once again, great job @consideRatio! I'm going to merge the PR. As mentioned in the conversation, we can follow up in dedicated PRs for tasks such as automated integration tests run as part of our CI workflow.

Regarding support for multiple Helm releases running in the same cluster, I think we cannot do much about that. This problem is addressed by the Operator Lifecycle Manager, where you define an OperatorGroup to configure the operator's multi-tenancy. For example, if the target namespaces specified by two different OperatorGroup instances intersect, OLM treats the configuration as invalid.
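For reference, a minimal sketch of the OLM concept mentioned above (names and namespaces are illustrative): an OperatorGroup declares which namespaces an operator installation targets, and OLM flags overlapping groups as invalid:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: starboard-operator-group     # illustrative name
  namespace: operators               # namespace where the operator itself runs
spec:
  targetNamespaces:                  # namespaces the operator is allowed to watch
    - team-a
    - team-b
```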
I'll update the docs / installation guide with the Helm chart in PR #201.
Wieee! Thank you for your review and encouragement @danielpacak ❤️ 🎉 🌻
I understand that you're accepting contributions that define a Helm chart for the starboard-operator, so this PR is meant to fix #187. @krol3 and I both started working on this in parallel, but as discussed in #197 (comment), we're continuing with this PR, which @krol3 will review!
PR ambition
In this PR, I've tried to follow the current state of evolving Helm chart best practices and to create a foundation that will be relatively easy to maintain. For example, I've tried to avoid the anti-pattern of hardcoding support for a limited set of environment variables, which would have needed updating whenever the starboard-operator binary gained new configuration options.
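As a hedged sketch of one way to avoid that anti-pattern (not necessarily what this chart does; `operator.env` is a hypothetical values key), the deployment template can render an open-ended map of environment variables, so new operator options need no template changes:

```yaml
# Renders every key/value pair under operator.env as a container env var,
# so a new OPERATOR_* option only requires a values.yaml change.
env:
  {{- range $name, $value := .Values.operator.env }}
  - name: {{ $name }}
    value: {{ $value | quote }}
  {{- end }}
```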
Part of this PR
- A Helm chart installable with helm install|upgrade
Undecided if part of PR
- A CI test running helm template which only triggers when the chart folder is changed.

Not part of this PR
- A values.schema.json that automatically validates Helm values passed to helm template|install|upgrade
Things to consider