KubeCon 2020: NATS Tutorial
NATS is a simple, secure and performant communications system for digital systems, services and devices. NATS is a hosted project in the Cloud Native Computing Foundation (CNCF). NATS has over 30 client language implementations, and the NATS Server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
Learn how to build applications that span more than one Kubernetes region by using a NATS-based global communications network. This talk covers how to set up a globally available NATS cluster across multiple Kubernetes regions using NATS gateways and leafnode connections, as well as how to build applications that take advantage of the NATS decentralized authorization model, demonstrated by implementing a simple Slack-like clone that runs in your terminal.
You can watch the tutorial recording online.
Follow along with this repo: https://github.com/wallyqs/kubecon2020
```sh
git clone https://github.com/wallyqs/kubecon2020
```
```sh
curl -LO https://raw.githubusercontent.com/nats-io/nsc/master/install.sh
less install.sh
sh ./install.sh
```
```sh
curl -fSL https://nats-io.github.io/k8s/setup/nsc-setup.sh | sh
```
```sh
tree nsc/ | less
nsc describe jwt -f nsc/accounts/nats/KO/KO.jwt
```
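The JWTs that nsc manages are standard three-segment tokens (header.payload.signature), where the header and payload are base64url-encoded JSON; that is what `nsc describe jwt` renders. A minimal sketch of the encoding, using a toy payload rather than a real signed account claim:

```python
import base64
import json

# Toy claim for illustration only; a real nsc-issued JWT is signed with
# an ed25519 nkey and carries many more fields.
header = {"typ": "jwt", "alg": "ed25519-nkey"}
payload = {"name": "KUBECON", "nats": {"type": "account"}}

def b64url(obj):
    # base64url-encode compact JSON and strip the '=' padding, as JWTs do.
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

token = ".".join([b64url(header), b64url(payload), "SIGNATURE"])
print(token)

# Decoding the middle segment recovers the claims without verifying
# the signature, which is all a tool needs to *display* the account.
seg = token.split(".")[1]
seg += "=" * (-len(seg) % 4)  # restore the stripped padding
print(json.loads(base64.urlsafe_b64decode(seg)))
```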
- We need at least 3 users:
  - Chat Credentials Requestor
  - Credentials Provisioner
  - Chat User (will be dynamically generated)
```sh
nsc add account --name KUBECON
nsc list accounts
nsc describe jwt -f ./nsc/accounts/nats/KO/accounts/KUBECON/KUBECON.jwt
```
Next, add a signing key to the account. This is needed so that the credentials provisioner can create users dynamically.
```sh
nsc generate nkey --account --store
nsc edit account --sk ACUIKNKJIAPABWJSIJM4GFYLQLL7RUWEBI2BIZYUINPWV5432ZOAEDV4
nsc describe jwt -f ./nsc/accounts/nats/KO/accounts/KUBECON/KUBECON.jwt
```
```sh
nsc add user chat-access \
  -K $NKEYS_PATH/keys/A/AO/AAOEOFBQCJKEJ7XZLLSHKVCERH34OPZOIJMOUUVW7QKESQ2KT33JZDRI.nk \
  --allow-sub 'chat.req.access' \
  --allow-pubsub '_INBOX.>' \
  --allow-pubsub '_R_' \
  --allow-pubsub '_R_.>'

nsc describe jwt -f $NKEYS_PATH/creds/KO/KUBECON/chat-access.creds
```
```sh
nsc add user chat-creds-request \
  -K $NKEYS_PATH/keys/A/AO/AAOEOFBQCJKEJ7XZLLSHKVCERH34OPZOIJMOUUVW7QKESQ2KT33JZDRI.nk \
  --allow-pub 'chat.req.access' \
  --allow-pubsub '_INBOX.>' \
  --allow-pubsub '_R_' \
  --allow-pubsub '_R_.>'

nsc describe jwt -f $NKEYS_PATH/creds/KO/KUBECON/chat-creds-request.creds
```
Generate the NATS configuration.
```sh
source .nsc.env
```
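Based on the variables used later in this walkthrough (`$NKEYS_PATH`, `$NSC_HOME`), `.nsc.env` looks roughly like the following; the exact paths are an assumption and depend on where the setup script placed the `nsc/` directory:

```sh
# Sketch of .nsc.env (paths are assumptions, not verbatim output):
export NKEYS_PATH=$PWD/nsc/nkeys
export NSC_HOME=$PWD/nsc/accounts
```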
```sh
nsc list accounts
nsc generate config --mem-resolver --sys-account SYS
nsc generate config --mem-resolver --sys-account SYS > conf/resolver.conf
```
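For reference, the generated `resolver.conf` embeds the operator JWT, names the system account, and preloads each account JWT into memory. A trimmed sketch (JWTs and public keys elided with placeholders):

```conf
// Operator "KO"
operator: <operator JWT>

// System account
system_account: <SYS account public key>

resolver: MEMORY
resolver_preload: {
  // Account "KUBECON"
  <KUBECON account public key>: <KUBECON account JWT>
}
```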
Start the NATS Server:
```sh
nats-server -c conf/resolver.conf
```
Try to make a request:
```sh
nats-req -creds nsc/nkeys/creds/KO/KUBECON/chat-creds-request.creds chat.req.access example
```
Create a mock responder:
```sh
nats-rply -creds nsc/nkeys/creds/KO/KUBECON/chat-access.creds chat.req.access example
```
Start the credentials provisioner service:

```sh
cd chat-access

go run main.go --acc $NSC_HOME/nats/KO/accounts/KUBECON/KUBECON.jwt \
  --sk $NKEYS_PATH/keys/A/AO/AAOEOFBQCJKEJ7XZLLSHKVCERH34OPZOIJMOUUVW7QKESQ2KT33JZDRI.nk \
  --creds $NKEYS_PATH/creds/KO/KUBECON/chat-access.creds
```
Then request credentials for a chat user and start the chat app:

```sh
cd chat

nats-req -creds nsc/nkeys/creds/KO/KUBECON/chat-creds-request.creds chat.req.access wallyqs 2> my.creds

go build ./...
./chat -creds my.creds
```
You can find info here:
https://docs.nats.io/nats-on-kubernetes/super-cluster-on-digital-ocean
Let’s create 3 clusters in DigitalOcean:
```sh
doctl kubernetes cluster create nats-k8s-sfo2 --count 3 --region sfo2
doctl kubernetes cluster create nats-k8s-sgp1 --count 3 --region sgp1
doctl kubernetes cluster create nats-k8s-ams3 --count 3 --region ams3
```
Next, open the ports that NATS uses in each cluster's firewall:

- 4222 is the client port
- 7422 is the port for leafnode connections
- 7522 is the port for gateway connections (cluster of clusters)
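In nats-server configuration terms, those ports map roughly to the blocks below; this is a sketch only, since the Helm chart used later renders the real configuration (the gateway name shown is just one of the three clusters):

```conf
port: 4222          # client connections

leafnodes {
  port: 7422        # leafnode connections
}

gateway {
  name: "do-sfo2-nats-k8s-sfo2"   # one gateway name per cluster
  port: 7522        # gateway connections between clusters
}
```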
```sh
for firewall in `doctl compute firewall list | tail -n 3 | awk '{print $1}'`; do
  doctl compute firewall add-rules $firewall --inbound-rules protocol:tcp,ports:4222,address:0.0.0.0/0
  doctl compute firewall add-rules $firewall --inbound-rules protocol:tcp,ports:7422,address:0.0.0.0/0
  doctl compute firewall add-rules $firewall --inbound-rules protocol:tcp,ports:7522,address:0.0.0.0/0
done
```
Install Helm and add the NATS Helm charts repo:

```sh
brew install helm

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
```
```sh
for ctx in do-ams3-nats-k8s-ams3 do-sfo2-nats-k8s-sfo2 do-sgp1-nats-k8s-sgp1; do
  kubectl --context $ctx create cm nats-accounts --from-file conf/resolver.conf
  # kubectl --context $ctx delete cm nats-accounts
done
```
Here we gather explicit URL endpoints for the gateways, though external-dns could be used for this instead:
```sh
for ctx in do-ams3-nats-k8s-ams3 do-sgp1-nats-k8s-sgp1 do-sfo2-nats-k8s-sfo2; do
  echo "    - name: $ctx"
  echo "      urls:"
  for externalIP in `kubectl --context $ctx get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'`; do
    echo "        - nats://$externalIP:7522";
  done
  echo
done
```
Example output:

```yaml
- name: do-ams3-nats-k8s-ams3
  urls:
    - nats://164.90.192.194:7522
    - nats://164.90.192.226:7522
    - nats://164.90.192.80:7522
- name: do-sgp1-nats-k8s-sgp1
  urls:
    - nats://188.166.236.158:7522
    - nats://188.166.232.25:7522
    - nats://188.166.236.155:7522
- name: do-sfo2-nats-k8s-sfo2
  urls:
    - nats://64.227.50.254:7522
    - nats://64.227.54.26:7522
    - nats://138.197.219.203:7522
```
Save the following values to conf/super-cluster.yaml (the gateway name is set per cluster via --set gateway.name):

```yaml
nats:
  image: nats:alpine
  # Bind a host port for each one of the pods.
  externalAccess: true
  logging:
    debug: false
    trace: false

cluster:
  enabled: true

auth:
  enabled: true
  resolver:
    ##############################
    #                            #
    #  Memory resolver settings  #
    #                            #
    ##############################
    type: memory
    #
    # Use a configmap reference which will be mounted
    # into the container.
    #
    configMap:
      name: nats-accounts
      key: resolver.conf

gateway:
  enabled: true
  # NOTE: defined via --set gateway.name="$ctx"
  # name: $ctx
  gateways:
    - name: do-ams3-nats-k8s-ams3
      urls:
        - nats://164.90.192.194:7522
        - nats://164.90.192.226:7522
        - nats://164.90.192.80:7522
    - name: do-sgp1-nats-k8s-sgp1
      urls:
        - nats://188.166.236.158:7522
        - nats://188.166.232.25:7522
        - nats://188.166.236.155:7522
    - name: do-sfo2-nats-k8s-sfo2
      urls:
        - nats://64.227.50.254:7522
        - nats://64.227.54.26:7522
        - nats://138.197.219.203:7522

natsbox:
  enabled: true
```
```sh
for ctx in do-ams3-nats-k8s-ams3 do-sfo2-nats-k8s-sfo2 do-sgp1-nats-k8s-sgp1; do
  helm --kube-context $ctx install nats nats/nats -f conf/super-cluster.yaml --set gateway.name=$ctx
  # helm --kube-context $ctx delete nats
done
```
- Peek at the connect_urls and confirm that the routes are present.
```sh
telnet 188.166.232.25 4222
```
Try to make a request from SF:
```sh
nats-req -s 138.197.219.203 -creds nsc/nkeys/creds/KO/KUBECON/chat-creds-request.creds chat.req.access example
```
Create a mock responder in AMS:
```sh
nats-rply -s 164.90.192.226 -creds nsc/nkeys/creds/KO/KUBECON/chat-access.creds chat.req.access example
```
Finally, monitor all traffic by subscribing to the full wildcard with the system account credentials:

```sh
nats-sub -s 188.166.236.158 -creds ./nsc/nkeys/creds/KO/SYS/sys.creds '>'
```