
NATS supercluster in kubernetes #78

Closed
vtomar01 opened this issue Jun 26, 2020 · 3 comments
@vtomar01

I am using this helm chart to deploy NATS cluster in AWS.
https://github.com/nats-io/k8s/tree/master/helm/charts/nats

I have created 2 clusters, one in the AWS Singapore region and another in the AWS Sydney region.

Each cluster has 3 nodes and uses a StatefulSet so that cluster routes can be hardcoded.

I need to set up communication between these 2 clusters using gateways.
What are the suggestions for setting up gateway URLs? We can't have static IPs/DNS per pod because our K8s setup is very elastic with auto-scaling enabled, so nodes can come up and go down.

I have set up a Kubernetes Service of type LoadBalancer (AWS) targeting the gateway port and am using this load balancer's DNS as the gateway URL. This seems to work, but I see the connection getting dropped repeatedly, with this message in the NATS pod logs:
'Gateway connection closed'.

What are the best practices to form super clusters in kubernetes elastic environments?

@wallyqs
Member

wallyqs commented Jun 26, 2020

(fyi transferring this to the nats-io/k8s repo)

  • Which load balancer type are you using for the setup? One option could be the NLB service from AWS, like this: https://docs.nats.io/nats-on-kubernetes/nats-external-nlb. That page only covers client connections, but you could change the port for gateway connections as well. Using an NLB with NATS should be OK because it can be set up to use TLS. So you can create another service for the gateways as follows:
apiVersion: v1
kind: Service
metadata:
  name: nats-nlb-gw
  namespace: default
  labels:
    app: nats
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: nats
    port: 7522
    protocol: TCP
    targetPort: 7522
  selector:
    app: nats
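On the NATS server side, each cluster's gateway configuration would then point at the other region's NLB DNS name. A sketch in nats-server config format, as it might look for the Singapore cluster (the gateway names and DNS names below are placeholders, not from the original setup):

```
# Gateway block for the Singapore cluster (names/hosts are placeholders).
# Each cluster names itself and lists the remote clusters' gateway URLs.
gateway {
  name: "singapore"
  port: 7522
  gateways: [
    {name: "sydney", urls: ["nats://sydney-nlb.example.com:7522"]}
  ]
}
```

The Sydney cluster would carry the mirror-image block, with `name: "sydney"` and a remote entry pointing at the Singapore NLB.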
  • Another option, without using the NLB, is to use something like external-dns to dynamically announce the routes, exposing each NATS server's public IP address and host:port (this is the prod setup for connect.ngs.global, for example).
# Create 3 nodes Kubernetes cluster
eksctl create cluster --name nats-k8s-cluster \
  --nodes 3 \
  --node-type=t3.large \
  --region=eu-west-1

# Get the credentials for your cluster
eksctl utils write-kubeconfig --name $YOUR_EKS_NAME --region eu-west-1

After that is done, you get a set of 3 nodes from the example above:

 kubectl get nodes -o wide
NAME                                           STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-10-213.us-east-2.compute.internal   Ready    <none>   124d   v1.12.7   192.168.10.213   3.17.184.16     Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
ip-192-168-45-209.us-east-2.compute.internal   Ready    <none>   124d   v1.12.7   192.168.45.209   18.218.52.122   Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
ip-192-168-65-15.us-east-2.compute.internal    Ready    <none>   124d   v1.12.7   192.168.65.15    3.15.38.138     Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1

Then you can deploy NATS and create a headless service named nats which will represent the NATS Server nodes:

kubectl get svc nats -o wide
NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                 AGE   SELECTOR
nats   ClusterIP   None         <none>        4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCP   36d   app=nats
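The headless service behind that output would look roughly like the following (a sketch; the port names are assumed, the port numbers are taken from the PORT(S) column above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats
  labels:
    app: nats
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: nats
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
  - name: leafnodes
    port: 7422
  - name: gateways
    port: 7522
```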

Once external-dns is deployed, you have to use a NodePort service like the following, to keep the node IPs registered by external-dns mapped to the pods behind the headless service:

apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
  labels:
    app: nats
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nats.example.com
spec:
  type: NodePort
  selector:
    app: nats
  externalTrafficPolicy: Local
  ports:
  - name: client
    port: 4222
    nodePort: 30222 #  Arbitrary port to represent the external dns service, external-dns issue...
    targetPort: 4222  # NOTE: the NATS pods also use host ports

The external-dns process is then responsible for registering the nodes' public IPs to be served at nats.example.com.
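The NOTE about host ports in the NodePort manifest means the StatefulSet's pod template binds container ports directly on the node, so the node's public IP that external-dns registers actually reaches the server. A sketch of what that looks like in the nats container spec (ports shown are the ones used throughout this thread; field layout is assumed, not copied from the chart):

```yaml
# Container ports on the nats container; hostPort makes them reachable
# on the node's public IP that external-dns announces.
ports:
- name: client
  containerPort: 4222
  hostPort: 4222
- name: gateways
  containerPort: 7522
  hostPort: 7522
```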

@wallyqs wallyqs transferred this issue from nats-io/nats-server Jun 26, 2020
@vtomar01
Author

vtomar01 commented Jun 29, 2020

Thanks for the quick response. I have gone ahead with the NLB and it works for me.

@wallyqs wallyqs closed this as completed Jul 18, 2020
@1arrow

1arrow commented Oct 18, 2023

@vtomar01 I have the same situation: one cluster in eastus and another in westus. Both regions' VNets are connected with peering. I have a private load balancer added by a k8s Service for the gateway ports and configured both to talk to each other. I'm experiencing 503s from the NATS client when connecting with an externally facing URL. Do you have any documentation on how you set this up?
