
VerneMQ can't create a cluster automatically on Microsoft Azure Kubernetes Service (AKS) #52

Closed
nmatsui opened this issue May 17, 2018 · 3 comments


@nmatsui
Contributor

nmatsui commented May 17, 2018

I tried to create a VerneMQ cluster on Microsoft Azure Kubernetes Service (AKS) using the YAML below. The pods started successfully, but no cluster was formed.

The YAML file is as follows:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:1.3.1
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          value: "default"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_LISTENER__VMQ__CLUSTERING
          value: "0.0.0.0:44053"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
          value: "0.0.0.0:8883"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
          value: "/etc/ssl/ca.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
          value: "/etc/ssl/server.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
          value: "/etc/ssl/server.key"
        # if mqtt client can't use TLSv1.2
        # - name: DOCKER_VERNEMQ_LISTENER__SSL__TLS_VERSION
        #   value: "tlsv1"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - mountPath: /etc/ssl
          name: vernemq-certifications
          readOnly: true
        - mountPath: /etc/vernemq-passwd
          name: vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-certifications
        secret:
          secretName: vernemq-certifications
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
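
For completeness, the two secrets referenced by the StatefulSet were created beforehand from local files, roughly along these lines (the local file names here are only illustrative):

# illustrative sketch: create the secrets mounted at /etc/ssl and /etc/vernemq-passwd
$ kubectl create secret generic vernemq-certifications \
    --from-file=ca.crt --from-file=server.crt --from-file=server.key
$ kubectl create secret generic vernemq-passwd --from-file=vmq.passwd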

Kubernetes v1.9.6 is running on AKS.

$ kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-83898320-0   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-1   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-2   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-3   Ready     agent     2h        v1.9.6

The pods started and ran successfully.

$ kubectl get pods -l app=vernemq
NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          1m
vernemq-1   1/1       Running   0          1m
vernemq-2   1/1       Running   0          1m

However, the nodes did not form a cluster.

$ kubectl exec vernemq-0 -- vmq-admin cluster show
Node 'VerneMQ@vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1

The pod's log shows that an SSL certificate error occurred.

$ kubectl logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
vernemq failed to start within 15 seconds,
see the output of 'vernemq console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
...
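
This error comes from the curl call that the image's start script makes against the Kubernetes API to discover the other pods. It can be reproduced from inside a pod roughly like this (a sketch only; the exact URL and options used by the start script may differ slightly):

# sketch: call the API the same way the start script does, using the service account token
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -sS -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
# on AKS this fails with the same 'certificate subject name' error shown above;
# adding --insecure skips the server certificate check and the call goes through
$ curl -sS --insecure -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
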
nmatsui added a commit to nmatsui/docker-vernemq that referenced this issue May 17, 2018
add '--insecure' option to curl command when retrieving kubernetes's info from 'https://kubernetes.default.svc.cluster.local/api/v1/namespaces/'
@nmatsui
Contributor Author

nmatsui commented May 17, 2018

I sent a Pull Request #53 to resolve this issue.

I built a Docker image that includes this pull request and tagged it 'nmatsui/docker-vernemq:debug_insecure_kubernetes_restapi'. By deploying this image instead of 'erlio/docker-vernemq:1.3.1', I got a working VerneMQ cluster on AKS.
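
For anyone who wants to try the same workaround, swapping the image in the existing StatefulSet can be done roughly like this (depending on the StatefulSet's update strategy, the pods may also need to be deleted so they are recreated with the new image):

# illustrative: point the StatefulSet above at the patched image
$ kubectl set image statefulset/vernemq vernemq=nmatsui/docker-vernemq:debug_insecure_kubernetes_restapi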

$ kubectl get pods -l app=vernemq
NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          1m
vernemq-1   1/1       Running   0          1m
vernemq-2   1/1       Running   0          50s
$ kubectl logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8157    0  8157    0     0   105k      0 --:--:-- --:--:-- --:--:--  106k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8157    0  8157    0     0   109k      0 --:--:-- --:--:-- --:--:--  110k
2018-05-17 05:38:31.172 [info] <0.31.0> Application plumtree started on node 'VerneMQ@vernemq-0.vernemq.default.svc.cluster.local'
...
$ kubectl exec vernemq-0 -- vmq-admin cluster show
+---------------------------------------------------+-------+
|                       Node                        |Running|
+---------------------------------------------------+-------+
|VerneMQ@vernemq-0.vernemq.default.svc.cluster.local| true  |
|VerneMQ@vernemq-1.vernemq.default.svc.cluster.local| true  |
|VerneMQ@vernemq-2.vernemq.default.svc.cluster.local| true  |
+---------------------------------------------------+-------+
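
As a further check, a TLS client should now be able to connect through the mqtts LoadBalancer service, for example with mosquitto_pub (the CA file, username and password below are placeholders):

# placeholder values: adjust the CA file and credentials to your own setup
$ EXTERNAL_IP=$(kubectl get svc mqtts -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ mosquitto_pub -h "$EXTERNAL_IP" -p 8883 --cafile ca.crt -u someuser -P somepassword -t test/topic -m hello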

nmatsui added a commit to nmatsui/docker-vernemq that referenced this issue May 18, 2018
add '--insecure' option to curl command only if 'DOCKER_VERNEMQ_KUBERNETES_INSECURE' environment variable is set.
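
With that change the insecure mode becomes opt-in; it would presumably be enabled on AKS with something like the following (the exact wiring depends on the final version of the PR):

# illustrative: enable the opt-in insecure mode via the environment variable from the commit above
$ kubectl set env statefulset/vernemq DOCKER_VERNEMQ_KUBERNETES_INSECURE=1
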
nmatsui added a commit to nmatsui/docker-vernemq that referenced this issue May 20, 2018
edit the comment & README
nmatsui added a commit to nmatsui/docker-vernemq that referenced this issue May 20, 2018
edit README
@ioolkos
Contributor

ioolkos commented Jun 5, 2018

@dergraf, @larshesel I guess we did not review this PR yet?
Apologies, and thanks to @nmatsui!

@dergraf
Contributor

dergraf commented Jun 5, 2018

Closing, thank you for your PR @nmatsui.
