Multiple Host HOWTO

Andy Wick edited this page May 13, 2018 · 25 revisions

Going from a single host to a multiple-host deployment isn't too difficult: install the Moloch deb/rpm on each machine and point them all at the same Elasticsearch cluster. The biggest issues are opening up Elasticsearch to more than just localhost and getting the Elasticsearch configuration right. These instructions assume you installed from the prebuilt deb/rpm and that everything lives in /data/moloch.
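Concretely, "point to the same Elasticsearch cluster" means every sensor's config.ini uses the same elasticsearch setting. A minimal sketch (the hostname and interface are placeholders, not from a real deployment):

```ini
; /data/moloch/etc/config.ini -- shared by every capture/viewer node
[default]
; All sensors point at the same Elasticsearch cluster (example hostname)
elasticsearch=http://es.example.com:9200
; Interface to sniff on; adjust per your hardware
interface=eth0
```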

Expanding Elasticsearch

If you started from the demo install, you should move Elasticsearch onto multiple machines if you plan to run a large Moloch cluster. We no longer provide detailed instructions, since Elastic now has lots of good tutorials. If Elasticsearch runs on dedicated machines, give it up to half of physical memory (capped at 30G) for the JVM heap. You can read more about how many nodes you need in the FAQ.
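The heap sizing above is set in Elasticsearch's jvm.options. A sketch for a machine with 32G of RAM (the 16g value is an example; use half your physical memory, never more than ~30G so compressed object pointers stay enabled):

```ini
# jvm.options -- set min and max heap to the same value
-Xms16g
-Xmx16g
```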

At a high level you will want to:

  • Change your current cluster from listening on 127.0.0.1 (localhost) to 0.0.0.0
  • Add more Elasticsearch nodes to the cluster
  • Mark the old demo node as excluded so shards are moved off of it
  • Wait for all the shards to move to the new nodes
  • Shut down the old demo node
  • Set up iptables on the Elasticsearch machines, since by default there is NO protection
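The "exclude the old demo node" step maps to Elasticsearch's shard-allocation filtering. A minimal sketch of the transient settings body you would PUT to the _cluster/settings endpoint (the IP is a placeholder for your old demo node):

```json
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "192.168.1.10"
  }
}
```

After applying this, watch _cluster/health until relocating_shards reaches 0 and the cluster is green, then it is safe to shut the old node down.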

Make sure you set gateway.recover_after_nodes and gateway.expected_nodes to the total number of DATA nodes.
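In elasticsearch.yml on each node, these gateway settings look like the following sketch, assuming a cluster with 3 data nodes (adjust the counts to your own total):

```yaml
# elasticsearch.yml -- delay recovery until all data nodes have joined
gateway.recover_after_nodes: 3
gateway.expected_nodes: 3
```

This prevents Elasticsearch from starting shard recovery before the whole cluster is present after a full restart.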

Capture/Viewer nodes

Adding multiple capture nodes is easy: install the prebuilt deb/rpm package on each machine. It is best to use a system like Ansible or Chef so you can use the same config.ini file everywhere and push it out to each of the sensors. As long as all the capture/viewer nodes talk to the same Elasticsearch cluster, they will show up in the same UI.
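Pushing the shared config out with Ansible can be as simple as a copy task. A hypothetical playbook sketch (the group name moloch_sensors and the files/config.ini path are assumptions for illustration):

```yaml
# Push one shared config.ini to every sensor
- hosts: moloch_sensors
  become: yes
  tasks:
    - name: Deploy shared Moloch config
      copy:
        src: files/config.ini
        dest: /data/moloch/etc/config.ini
```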

If you set up multiple Elasticsearch clusters for multiple Moloch clusters, you can merge the results by using a multiviewer.