Enhancement Idea
Goal
I believe the installation can be simplified much more, even without Helm.
Motivation
To allow smooth onboarding and an easy getting-started experience with NATS and JetStream, I believe the ease of installing and getting our hands dirty is key. The current installation steps require some understanding of how the NATS Server cluster needs to be running, that a JetStream-enabled NATS node also needs to be added, etc.
This could perhaps be handled by a Helm Chart, but that would effectively make Helm the default, and some familiarity with Helm would then be required for managing this controller.
Background
There are a few good examples, such as the cert-manager and Argo projects, which provide a single YAML file for installation that bundles all CRDs, RBAC, etc.
Ref:
https://cert-manager.io/docs/installation/kubernetes/
https://argoproj.github.io/argo/quick-start/
You could achieve a similar setup with a Helm Chart, but when you are introducing a controller to the cluster, the installation of the controller itself can usually be simple and straightforward (like the examples above).
Even in a complex scenario such as Istio, they have their own installation CLI, istioctl, which essentially just generates YAML, applies it to the cluster, and ensures a clean installation. The CLI has many more features for debugging, management, etc., but the installation itself is straightforward.
You can find the installation guide for istioctl at https://istio.io/latest/docs/setup/install/istioctl/, but there is also a way to generate the entire YAML instead: istioctl manifest generate > istio.yaml. After that, you can run kubectl apply -f istio.yaml to deploy all components.
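For reference, the single-manifest pattern boils down to one kubectl apply of a bundled YAML published per release. The cert-manager version below is only illustrative, and the nack manifest URL is a hypothetical example of what this controller could publish, not an existing artifact:

```sh
# cert-manager style: a single bundled manifest attached to each release
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

# Hypothetical equivalent for this controller, if a bundled YAML were published per release
kubectl apply -f https://github.com/nats-io/nack/releases/download/vX.Y.Z/jetstream-controller.yaml
```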
Implementation Ideas
I think there are about 3 approaches:
Like cert-manager, each release could generate a bundled YAML as a build artifact. This would be part of the release CI job (see the sketch after this list).
Like argo/argo-cd, the bundled YAML can be generated ahead of the release, and the release tag can point to it.
Adopt a Helm-only installation, and do not support a standalone YAML installation path.
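A minimal sketch of the first approach, assuming the manifests live under a deploy/ directory and kustomize is available in the CI environment (the workflow file, paths, and output file name are all hypothetical):

```yaml
# .github/workflows/release.yaml (hypothetical sketch)
# On a tagged release, bundle all manifests into one YAML and attach it to the GitHub release.
on:
  push:
    tags: ["v*"]
jobs:
  bundle-manifests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build single-file manifest
        run: kustomize build deploy/ > jetstream-controller.yaml
      - name: Attach bundled YAML to the release
        uses: softprops/action-gh-release@v1
        with:
          files: jetstream-controller.yaml
```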
Given that the controller currently assumes a NATS Server cluster is already installed in the cluster, using Helm seems to be the only viable option at the moment. I think it would make sense for the controller to support generating NATS Server clusters in the future, so it may be better to allow the controller to run by itself. This would allow having multiple NATS Server clusters in a single Kubernetes cluster, while still having a single controller manage all CRDs (each CRD would then be able to target a NATS Server cluster of its choice). With a ValidatingWebhook, we can also ensure any JetStream CRD will be rejected if created without running NATS in the cluster.
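To illustrate the per-CRD targeting idea, here is a rough sketch of a Stream resource; the group/version and basic fields loosely follow the existing JetStream CRDs, but the servers field is purely hypothetical and only meant to show how a resource could point at one of several NATS Server clusters:

```yaml
apiVersion: jetstream.nats.io/v1beta1
kind: Stream
metadata:
  name: orders
spec:
  name: orders
  subjects: ["orders.*"]
  storage: file
  # Hypothetical field: which NATS Server cluster this resource targets,
  # allowing a single controller to manage CRDs across multiple clusters.
  servers:
    - nats://nats-cluster-a.nats.svc:4222
```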
Other Notes
I don't mean to self-promote, but I have started putting together a JetStream getting-started doc, mainly for myself and my teammates to learn from. You can see how the installation step involves a lot of implementation detail, which you may not need to know when you just want to play with it.
Thanks @rytswd for the ideas, I hope as well that the setup can be simplified much further. I think we could have a replacement for the nats-operator within the nack repo that manages the NATS Server instances as CRDs without using Helm; it just needs some careful design of the CRDs so that they 'do not get in the way' as happens with the nats-operator, and implements all the options from https://github.com/nats-io/nats-server/blob/master/server/opts.go
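As a rough sketch of what such a CRD could look like; the kind, group/version, and every field below are hypothetical, intended only to show how nats-server options (server/opts.go) might surface in a spec without Helm:

```yaml
# Hypothetical NATS Server CRD managed from the nack repo (not an existing API)
apiVersion: server.nats.io/v1alpha1
kind: NatsServer
metadata:
  name: nats-a
spec:
  replicas: 3
  version: "2.2.0"
  jetstream:
    enabled: true
    fileStorage:
      size: 10Gi
  # Remaining fields would map onto nats-server options from server/opts.go,
  # e.g. authorization, TLS, leafnodes, gateways.
```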
With ValidatingWebhook, we can also ensure any JetStream CRD will be rejected if created without running NATS in the cluster.
By the way, right now the jetstream controller only has a single NATS connection, so it would only be possible to manage JetStream CRDs from a single NATS Server. But that also means the streams do not have to be within the same cluster; they could represent remote streams as well.