
Handling NATS clusters in multiple namespaces #134

Open
lwierzbi-work opened this issue Jul 17, 2023 · 5 comments

Comments


lwierzbi-work commented Jul 17, 2023

Hi,
I have a k8s cluster with two NATS clusters in two different namespaces (dev and stg environments). I installed NACK in dev with the Helm chart, and when installing in stg I ran into a resource ownership conflict:

helm install nack-jsc nats/nack --set jetstream.nats.url=nats://nats:4222 -n nats-io-${TIER} 

Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "jetstream-controller-cluster-role" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "nats-io-stg": current value is "nats-io-dev"

Since the NATS URL is one of NACK's install parameters, I assume that one instance can control only one NATS cluster. Is there a way to have several deployments of NACK, each controlling the configuration of a different NATS cluster within a single k8s cluster?

@TarasLykhenko

I am also hitting this issue. Would it be possible to implement a solution similar to Flux's, which uses labels to partition the CRs across multiple instances of the same controller?

https://fluxcd.io/flux/installation/configuration/sharding/#assign-resources-to-shards
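For reference, in the linked Flux doc each resource is assigned to a shard with a label on the CR, and each controller instance only watches its own shard via a label selector (Flux's --watch-label-selector flag). A rough sketch of how that could look for a NACK Stream -- the jetstream.nats.io/shard label key is hypothetical; only Flux's sharding.fluxcd.io/key exists today:

apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: foo
  labels:
    # hypothetical shard label, modeled on Flux's sharding.fluxcd.io/key
    jetstream.nats.io/shard: dev
spec:
  name: foo
  subjects: ["foo", "foo.>"]
  storage: file
  replicas: 1

Each controller deployment would then need a matching label selector so it reconciles only the CRs for its shard.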


arguile- commented Feb 2, 2024

I tried using Accounts -- which specify the servers -- to do this, but it seems like the account is ignored and the underlying NACK config is used instead.

E.g.

---
apiVersion: jetstream.nats.io/v1beta2
kind: Account
metadata:
  name: a
spec:
  name: a
  servers:
  - nats://nats.namespace_a.svc.cluster.local:4222

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: foo
spec:
  name: foo
  subjects: ["foo", "foo.>"]
  storage: file
  replicas: 1
  account: a # <-- Create stream using account A information

I would have expected the account information to be respected and used in creating the stream.


arguile- commented Feb 2, 2024

For anyone struggling with this, our eventual work-around was to switch to namespaced: true and then centrally manage the Stream CRs for each cluster within the controller's namespace.

This doesn't allow spreading the CRs across the various project namespaces throughout the cluster, though, so it may not work for everyone.
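A rough sketch of the install for this work-around, assuming the chart exposes the setting as a top-level namespaced value (release and namespace names taken from the original report):

helm install nack-jsc nats/nack \
  --set namespaced=true \
  --set jetstream.nats.url=nats://nats:4222 \
  -n nats-io-${TIER}

With namespaced RBAC the ClusterRole is no longer shared between releases, so the ownership conflict from the original report should not occur.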

@nico151999

There is a flag called crdConnect. If the NATS URL is omitted, the chart currently just enables this flag instead of using one NATS URL globally. If I'm not mistaken, the controller developers' intention is to have either one global config via values.yaml or n configs via Accounts. I didn't find this documented anywhere, though; it's just what I found from a quick look at the code, so the behaviour may change. There are several other things that don't seem to be documented (or that I couldn't find), and I wonder what they are used for, e.g. the jetstream.enabled value in the nack chart.
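A minimal values.yaml sketch of that behaviour, assuming the chart treats an empty/omitted URL as the trigger for crdConnect:

jetstream:
  nats:
    # leave the URL unset/empty: the chart then enables the controller's
    # crdConnect flag, and server URLs come from each Account CR instead
    url: ""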

@darkrift

For this to work, you need to use crdConnect, which the chart activates when no URL is defined for jetstream.nats. You need to get rid of --set jetstream.nats.url=nats://nats:4222.

When the URL is empty, the controller switches to using the NATS server URLs from the Account referenced by the stream's account attribute. Otherwise, it ignores those and uses the global NATS URL.
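So, taking the command from the original report and only dropping the global URL flag, the install becomes:

helm install nack-jsc nats/nack -n nats-io-${TIER}

and the servers list of the Account referenced by each Stream (as in the example above) is what the controller connects to.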
