How to add a new cluster? #34

Closed
berrywira opened this issue Aug 12, 2015 · 2 comments

@berrywira

Hi there,

I was testing VerneMQ with a 3-node cluster. When I run "vmq-admin cluster join discovery-node=", the new node only joins the cluster after I restart the service. Why is the restart necessary?

And when a client does not send any messages for a period of 5 minutes, the service stalls: I say it stalls because the client cannot send any messages afterwards. Could you please give me step-by-step instructions for joining the cluster?

Thanks

@dergraf
Contributor

dergraf commented Aug 12, 2015

Hi there!

Thanks for reporting!
Regarding clustering, I recommend reading https://vernemq.com/docs/clustering.html, especially the sections about node names and firewalls.

This could be due to multiple problems, but restarting the service is definitely not needed for joining a cluster, so something else is going wrong there. What is vmq-admin cluster status telling you? Have you checked /var/log/vernemq/console.log?
Regarding your second problem, are you sure that it isn't the keepalive mechanism canceling the connections?
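
For reference, here is a minimal sketch of how to run those two checks on a node (the log path assumes a default package install):

```
# Show which nodes are part of the cluster and whether they are reachable
vmq-admin cluster status

# Follow the console log for join/leave events and listener errors
tail -f /var/log/vernemq/console.log
```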

We've encountered two different issues that may cause problems during cluster setup:

Problem 1: Change of node names

It is important that all of your VerneMQ nodes have a unique node name set in vernemq.conf before you try to cluster them. Once a node is part of the cluster you should not change its node name; otherwise you will run into a situation where your cluster is no longer ready to serve requests (unless you explicitly tell it to do so, but that's another topic). Changing the node name of a node that was already part of a cluster is currently a bad idea; don't do it.
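
As a minimal sketch, assuming the standard config location, the node name is set per machine in vernemq.conf; the address below is just an example and must be reachable by the other cluster nodes:

```
# /etc/vernemq/vernemq.conf
# Each node needs a unique name of the form Name@HostOrIP
nodename = VerneMQ@192.168.1.10
```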

Two options exist (a command sketch for the first option follows below):

  • 1st Option (Good):
    • Shut down the faulty node
    • Log in to another machine and execute vmq-admin cluster leave node=<<faultynode>>
    • Change the node name in vernemq.conf
    • Start up the node
    • Make the node part of the cluster again using vmq-admin cluster join discovery-node=<<othernode>>
  • 2nd Option (Bad):
    • Shut down every cluster node and delete the /var/lib/vernemq/meta/peer_service/cluster_state file
    • Set proper node names in vernemq.conf on every machine
    • Restart all nodes
    • Go on with joining the cluster using vmq-admin cluster join discovery-node=<<othernode>>
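
A rough shell sketch of the 1st option; the service commands and node names are assumptions that depend on your install, and each command runs on the machine indicated:

```
# On the faulty node: stop VerneMQ
sudo service vernemq stop

# On any other cluster node: remove the faulty node from the cluster
vmq-admin cluster leave node=VerneMQ@faulty-host

# On the faulty node: change nodename in vernemq.conf, then start it again
sudo service vernemq start

# On the renamed node: join the cluster via a running member
vmq-admin cluster join discovery-node=VerneMQ@other-host
```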

Also, a single running node is already a cluster, namely a single-node cluster. Everything said here also applies if you want to change, e.g., the name of such a single node. In this case, however, only the 2nd option currently works. This case is ugly and needs to be improved!
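
For completeness, a sketch of the 2nd option for such a single-node case (same caveats about service commands and paths; for a multi-node cluster you would repeat this on every node):

```
# Stop the node
sudo service vernemq stop

# Delete the persisted cluster state
sudo rm /var/lib/vernemq/meta/peer_service/cluster_state

# Set the proper nodename in vernemq.conf, then restart
sudo service vernemq start
```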

Problem 2: Wrong Listener Setup

We've seen a case, especially in virtualized environments, where binding to the 0.0.0.0 interface is a bad idea, as this interface may be shared across multiple instances. However, this is currently the default for the vmq listener used for cluster communication. You may want to try setting listener.vmq.clustering to a specific interface.
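
For example, in vernemq.conf (the IP is a placeholder for the node's own address, and 44053 is the default cluster communication port):

```
# Bind cluster communication to a specific interface instead of 0.0.0.0
listener.vmq.clustering = 192.168.1.10:44053
```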

Please let me know if this helps!

@berrywira
Author

Hi,

Sorry for the late feedback.
I followed your suggestions, and now the whole cluster is running and awesome.
My problem was a wrong cluster listener setup.
Thanks for helping me.

dergraf pushed a commit that referenced this issue Nov 18, 2016
lost QoS>=1 messages if message is enqueued in session termination window