NATS supercluster in kubernetes #78
(fyi transferring this to the
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-nlb-gw
  namespace: default
  labels:
    app: nats
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: nats
    port: 7522
    protocol: TCP
    targetPort: 7522
  selector:
    app: nats
```
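For context, this is a minimal sketch of the server-side gateway block that would listen on the port the NLB above forwards; the gateway name and the advertised hostname are placeholders, not from an actual setup:

```
gateway {
  # Assumed name for this cluster's gateway
  name: "eks-eu-west-1"

  # Must match the NLB targetPort above
  port: 7522

  # Assumed external hostname: other clusters reach this
  # one through the NLB's DNS name, not the pod IPs
  advertise: "nats-gw.example.com:7522"
}
```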
```sh
# Create a 3-node Kubernetes cluster
eksctl create cluster --name nats-k8s-cluster \
  --nodes 3 \
  --node-type=t3.large \
  --region=eu-west-1

# Get the credentials for your cluster
eksctl utils write-kubeconfig --name $YOUR_EKS_NAME --region eu-west-1
```

After that is done, you get a set of 3 nodes with the example above:

```sh
kubectl get nodes -o wide
```
```
NAME                                           STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-10-213.us-east-2.compute.internal   Ready    <none>   124d   v1.12.7   192.168.10.213   3.17.184.16     Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
ip-192-168-45-209.us-east-2.compute.internal   Ready    <none>   124d   v1.12.7   192.168.45.209   18.218.52.122   Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
ip-192-168-65-15.us-east-2.compute.internal    Ready    <none>   124d   v1.12.7   192.168.65.15    3.15.38.138     Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
```

Then you can deploy NATS and create a headless service named `nats`:

```sh
kubectl get svc nats -o wide
```

```
NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                 AGE   SELECTOR
nats   ClusterIP   None         <none>        4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCP   36d   app=nats
```

Once that is deployed, you can create a `NodePort` service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
  labels:
    app: nats
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nats.example.com
spec:
  type: NodePort
  selector:
    app: nats
  externalTrafficPolicy: Local
  ports:
  - name: client
    port: 4222
    nodePort: 30222 # Arbitrary port to represent the external dns service, external-dns issue...
    targetPort: 4222
    # NOTE: the NATS pods also use host ports
```
Thanks for the quick response. I have gone ahead with the NLB and it works for me.
@vtomar01 I have the same situation: one cluster in eastus and another in westus. The two regions' vnets are connected with peering. I have a private load balancer added by a k8s service for the gateway ports, and I have configured both clusters to talk to each other, but I am getting 503s from the NATS client when connecting with an externally facing URL. Do you have any documentation on how you set this up?
I am using this helm chart to deploy a NATS cluster in AWS:
https://github.com/nats-io/k8s/tree/master/helm/charts/nats
I have created two clusters, one in the AWS Singapore region and another in the AWS Sydney region.
Each cluster has 3 nodes and uses a statefulset so that the cluster routes can be hardcoded.
I need to set up communication between these two clusters using gateways.
What are the suggestions for setting up gateway URLs? We can't have static IPs/DNS per pod because our K8s setup is very elastic, with auto-scaling enabled, so nodes can come up and go down.
I have set up a Kubernetes Service of type AWS LoadBalancer targeting the gateway port, and I am using this load balancer's DNS name as the gateway URL. This seems to work, but I see the connection getting dropped repeatedly, with this message in the NATS pod logs:
'Gateway connection closed'
What are the best practices for forming superclusters in elastic Kubernetes environments?
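For reference, this is a minimal sketch of the gateway setup described above, using the load balancer DNS names as gateway URLs; the gateway names and hostnames are placeholders, not from an actual deployment:

```
# Singapore cluster's server config
# (the Sydney cluster mirrors this with the names swapped)
gateway {
  name: "ap-southeast-1"
  port: 7522
  gateways: [
    # Reach the Sydney cluster through its load balancer's DNS name
    { name: "ap-southeast-2", url: "nats://sydney-gw-nlb.example.com:7522" }
  ]
}
```

One thing worth checking for the repeated 'Gateway connection closed' messages: load balancers drop idle TCP flows (AWS NLBs time out idle connections after 350 seconds), which can cause quiet gateway connections to be cut and re-established repeatedly.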