
VerneMQ can't create a cluster automatically on Amazon Elastic Kubernetes Service (EKS) #64

Closed
chouhan opened this issue Aug 24, 2018 · 7 comments

@chouhan

chouhan commented Aug 24, 2018

PLEASE DO NOT MARK THIS AS DUPLICATE AND CLOSE IT, UNLESS THERE IS A COMPLETE SOLUTION TO THIS PROBLEM

I followed the write-up from https://github.com/nmatsui/kubernetes-vernemq and the issue resolution from #52. In this case, it's not an SSL issue. I am trying to do the exact same thing on Amazon EKS.

Here is the YAML file

--- 
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: vernemq
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: vernemq
  serviceName: vernemq
  template: 
    metadata: 
      labels: 
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq
        # image: nmatsui/docker-vernemq:debug_insecure_kubernetes_restapi
        imagePullPolicy: Always
        # Just spin & wait forever
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: MY_POD_NAME
          valueFrom:
           fieldRef:
             fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          value: "default"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_LISTENER__VMQ__CLUSTERING
          value: "0.0.0.0:44053"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
          value: "0.0.0.0:8883"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
          value: "/etc/ssl/ca.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
          value: "/etc/ssl/server.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
          value: "/etc/ssl/server.key"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - mountPath: /etc/ssl
          name: vernemq-certifications
          readOnly: true
        - mountPath: /etc/vernemq-passwd
          name: vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-certifications
        secret:
          secretName: vernemq-certifications
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts

Kubectl version

>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
>kubectl get nodes

NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-138-81.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-233-48.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-96-199.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3

The Pods started & ran successfully.

>kubectl get pods -l app=vernemq

NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          5h
vernemq-1   1/1       Running   0          5h
vernemq-2   1/1       Running   0          5h

However, I could not get a cluster to form.

>kubectl exec vernemq-0 -- vmq-admin cluster show

Node 'VerneMQ@127.0.0.1' not responding to pings.
command terminated with exit code 1

The Pod's logs say nothing.

>kubectl logs vernemq-0

@larshesel
Contributor

Hi, what do the VerneMQ logs themselves contain?

@dergraf
Contributor

dergraf commented Aug 24, 2018

Something is wrong with the VerneMQ node name configuration. It should be something like VerneMQ@<YourPodName>, which is configured automatically.

Moreover, the DOCKER_VERNEMQ_LISTENER__VMQ__CLUSTERING setting shouldn't be used, as this is also auto-discovered in a Kubernetes environment.
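For reference (a sketch, not taken from the thread): the node name that was actually generated can be checked by reading the vm.args file the container entrypoint writes, whose path also appears in the pod logs later in this thread:

>kubectl exec vernemq-0 -- cat /etc/vernemq/vm.args

If the -name line is missing or still reads the stock default -name VerneMQ@127.0.0.1 (matching the ping error above), the entrypoint never configured it; in the first manifest the command/args override keeps the image's startup script from running at all.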

@codeadict
Contributor

codeadict commented Aug 24, 2018

Yep, I would recommend not using 0.0.0.0 for the clustering interface. Also, I would comment out command and args, and get the namespace automatically like:

        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
               fieldPath: metadata.namespace

Also, on the same service that exposes epmd you need to expose the cluster interface, I think:

apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
  - port: 44053
    name: vmq

@codeadict
Contributor

Plus, you need to define a Role that allows the MQTT entrypoint to access the Kubernetes API, like:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader

and after the spec directive on the StatefulSet definition add:

serviceAccountName: vernemq
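For clarity, a minimal sketch of where that line sits (it goes into the pod template spec of the StatefulSet, using the names already in this thread):

  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: erlio/docker-vernemq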

Hope this works 🌮

@chouhan
Author

chouhan commented Aug 24, 2018

First off, I really appreciate all of your responses.

Based on your suggestions, I updated the YAML file; here is the updated version. I am still not able to see the cluster info.

--- 
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: vernemq
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: vernemq
  serviceName: vernemq
  template: 
    metadata: 
      labels: 
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        # image: index.docker.io/chouhan/mqtt-docker:latest
        # image: nmatsui/docker-vernemq:debug_insecure_kubernetes_restapi
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/bash
              - -c
              - /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 1 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ@${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k
        # Just spin & wait forever
        # command: [ "/bin/bash", "-c", "--" ]
        # args: [ "while true; do sleep 30; done;" ]
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: MY_POD_NAME
          valueFrom:
           fieldRef:
             fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
           fieldRef:
             fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
          value: "0.0.0.0:8883"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
          value: "/etc/ssl/ca.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
          value: "/etc/ssl/server.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
          value: "/etc/ssl/server.key"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - mountPath: /etc/ssl
          name: vernemq-certifications
          readOnly: true
        - mountPath: /etc/vernemq-passwd
          name: vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-certifications
        secret:
          secretName: vernemq-certifications
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
  - port: 44053
    name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader

Here is the cluster status

>kubectl exec vernemq-0 -- vmq-admin cluster show
Node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local' not responding to pings.
command terminated with exit code 1

Here are the logs for Pod vernemq-0; they are almost identical (except for Pod names) for vernemq-1 & vernemq-2

>kubectl logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   302  100   302    0     0  10313      0 --:--:-- --:--:-- --:--:-- 10413
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   302  100   302    0     0  15253      0 --:--:-- --:--:-- --:--:-- 15894
jq: error: Cannot iterate over null
config is OK
-config /var/lib/vernemq/generated.configs/app.2018.08.24.15.08.55.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args
Exec:  /usr/lib/vernemq/erts-8.3.5.3/bin/erlexec -boot /usr/lib/vernemq/releases/1.5.0/vernemq               -config /var/lib/vernemq/generated.configs/app.2018.08.24.15.08.55.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args              -pa /usr/lib/vernemq/lib/erlio-patches -- console -noshell -noinput
Root: /usr/lib/vernemq
15:08:56.211 [info] Application lager started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.214 [info] Application vmq_plugin started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.214 [info] Application ssl_verify_fun started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.215 [info] Application epgsql started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.232 [info] writing state {[{[{actor,<<207,79,190,19,219,150,11,151,107,154,29,21,203,236,26,109,192,21,0,149>>}],1}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[['VerneMQ@vernemq-0.vernemq-0.svc.cluster.local',{[{actor,<<207,79,190,19,219,150,11,151,107,154,29,21,203,236,26,109,192,21,0,149>>}],1}]],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,19,120,1,203,96,206,97,96,96,96,204,96,130,82,41,12,172,137,201,37,249,69,185,64,81,145,243,254,251,132,111,79,227,158,158,61,75,86,244,244,27,169,220,3,162,12,83,179,18,25,179,50,56,83,24,88,82,50,147,75,18,25,19,5,128,144,35,49,32,209,32,67,32,11,13,100,48,162,138,129,173,0,17,76,41,12,186,97,169,69,121,169,190,129,14,101,32,58,183,80,215,64,15,193,42,46,75,214,75,206,41,45,46,73,45,210,203,201,79,78,204,33,205,121,32,103,32,156,200,64,138,19,65,90,1,243,147,87,169>>
15:08:56.253 [info] Datadir /var/lib/vernemq/meta/meta/0 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,49753693}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.261 [info] Datadir /var/lib/vernemq/meta/meta/1 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41204167}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.269 [info] Datadir /var/lib/vernemq/meta/meta/2 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,49994532}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.276 [info] Datadir /var/lib/vernemq/meta/meta/3 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41594423}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.284 [info] Datadir /var/lib/vernemq/meta/meta/4 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,32105734}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.291 [info] Datadir /var/lib/vernemq/meta/meta/5 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,34996423}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.299 [info] Datadir /var/lib/vernemq/meta/meta/6 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,46307568}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.306 [info] Datadir /var/lib/vernemq/meta/meta/7 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41998637}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.314 [info] Datadir /var/lib/vernemq/meta/meta/8 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,39684172}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.321 [info] Datadir /var/lib/vernemq/meta/meta/9 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,48185467}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.329 [info] Datadir /var/lib/vernemq/meta/meta/10 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,48796060}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.359 [info] Datadir /var/lib/vernemq/meta/meta/11 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41604386}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
15:08:56.401 [info] Application plumtree started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.421 [info] Application hackney started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.456 [info] Application inets started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.456 [info] Application xmerl started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.513 [info] Application vmq_plumtree started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:56.529 [info] Try to start vmq_plumtree: ok
15:08:56.902 [info] loaded 0 subscriptions into vmq_reg_trie
15:08:56.914 [info] cluster event handler 'vmq_cluster' registered
15:08:57.518 [warning] lager_error_logger_h dropped 8 messages in the last second that exceeded the limit of 100 messages/sec
15:08:57.518 [info] Application vmq_acl started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:57.577 [info] Application vmq_passwd started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'
15:08:57.684 [info] Application vmq_server started on node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local'

Some additional information

>kubectl exec -n platform vernemq-0 -- bash -c 'curl -s -X GET --insecure --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods?labelSelector=app=vernemq -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"' ; echo
Error from server (NotFound): pods "vernemq-0" not found
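As a side note, the NotFound error above comes from kubectl itself: the exec targets the platform namespace while these pods run in default, per the kubectl get pods output. A sketch of the same API check run without the namespace flag:

>kubectl exec vernemq-0 -- bash -c 'curl -s -X GET --insecure --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods?labelSelector=app=vernemq -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"' ; echo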
>kubectl get nodes
NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-138-81.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-233-48.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-96-199.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
>kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup vernemq-0
Server:         10.100.0.10
Address:        10.100.0.10:53

** server can't find vernemq-0: NXDOMAIN

*** Can't find vernemq-0: No answer

/ # exit
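As an aside, a bare pod name such as vernemq-0 is not expected to resolve from another pod. With the headless vernemq Service (clusterIP: None) and serviceName: vernemq on the StatefulSet, the name that should resolve is the pod's full DNS name, e.g. (assuming the pods run in the default namespace):

/ # nslookup vernemq-0.vernemq.default.svc.cluster.local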

>kubectl -n platform exec vernemq-0 -- nslookup kubernetes.default.svc.cluster.local
Error from server (NotFound): pods "vernemq-0" not found

>kubectl get pods -w -l app=vernemq
NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          16m
vernemq-1   1/1       Running   0          16m
vernemq-2   1/1       Running   0          16m

@codeadict
Contributor

codeadict commented Aug 24, 2018

Your namespace parameter is still wrong; it should be as I posted above:

        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

That's why you are getting Node 'VerneMQ@vernemq-0.vernemq-0.svc.cluster.local' not responding to pings. Note the duplicated vernemq-0: the pod name is being used where the namespace should be, because you are using fieldPath: metadata.name.
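With metadata.namespace in place, the generated node name should carry the real namespace in that position instead, e.g. (assuming the pods run in the default namespace):

VerneMQ@vernemq-0.default.svc.cluster.local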

@chouhan
Author

chouhan commented Aug 24, 2018

@codeadict - Thanks a ton. That works now. I can connect to both mqtt and mqtts. I would really appreciate any tips on deploying or using this as a highly available, production-grade VerneMQ cluster for low-latency device handshakes and message deliveries without failure. I understand that VerneMQ is built for exactly that, but I wanted to know whether there is anything else I need to take care of in a real-world scenario.
