
Multiple Nodes

Patrik Meijer edited this page May 19, 2017 · 9 revisions

To scale up a deployment, or to increase robustness by removing the single point of failure (a lone webgme server), multiple webgme server nodes can run behind a reverse proxy.

In order to correctly propagate websocket events between clients connected to different webgme nodes, the socket.io adapter must be set to redis (and a redis store must be accessible from all webgme nodes).

```js
gmeConfig.socketIO.adapter.type = 'redis'; // default is 'memory'
gmeConfig.socketIO.adapter.options = {
  uri: 'redis://' // or wherever your redis store is listening
};
```
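Taken together, a per-node deployment config might look like the following sketch. Here `gmeConfig` is a plain stub standing in for webgme's `config.default` (a real deployment would require and mutate that object), and the redis URI and the `PORT` environment-variable convention are assumptions:

```javascript
// Sketch of per-node settings for a multi-node deployment.
// The stub below stands in for webgme's config.default so the
// example stays self-contained and runnable.
var gmeConfig = {
    server: {port: 8888},
    socketIO: {adapter: {type: 'memory', options: {}}}
};

// All nodes share one redis-backed adapter so websocket events
// reach clients connected to any node (URI is assumed).
gmeConfig.socketIO.adapter.type = 'redis';
gmeConfig.socketIO.adapter.options = {uri: 'redis://127.0.0.1:6379'};

// Each node listens on its own port, e.g. 8001 and 8002 as in the
// reverse-proxy example on this page (env-variable convention assumed).
gmeConfig.server.port = parseInt(process.env.PORT || '8888', 10);

console.log(gmeConfig.socketIO.adapter.type, gmeConfig.server.port);
```

Each node is then launched as its own process with its own port, while all of them share the same redis store.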

Blob storage

Technically the server instances can run from the same directory, but in general you need to make sure that the blob storage is shared among the webgme nodes.

- For the 'FS' (file-system) storage, gmeConfig.blob.fsDir must point to the same location.
- If using the 'S3' storage, gmeConfig.blob.s3.endpoint should be the same on all nodes.

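With the 'FS' option this means every node's gmeConfig.blob.fsDir must resolve to the same shared directory, for example a network mount visible from all machines. A minimal sketch, where the stub object and the /mnt/webgme-blobs path are assumptions:

```javascript
// Stub standing in for webgme's config.default; a real deployment
// would require and mutate that object instead.
var gmeConfig = {blob: {type: 'FS', fsDir: 'blob-local-storage'}};

// Point every node at the same shared directory, e.g. an NFS mount
// available on all machines (path is assumed).
gmeConfig.blob.fsDir = '/mnt/webgme-blobs';

console.log(gmeConfig.blob.type, gmeConfig.blob.fsDir);
```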

If add-ons are enabled (by default they are not), you need to launch a shared addon_handler.js and configure all the servers to post to that machine.

```js
gmeConfig.addOn.workerUrl = ''; // URL of the shared add-on handler
```
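For illustration, each server could then point at the shared handler like so; the host addon-host and port 9000 are hypothetical, and gmeConfig is again a stub for webgme's config.default:

```javascript
// Stub for gmeConfig; a real deployment mutates webgme's
// config.default instead.
var gmeConfig = {addOn: {enable: false, workerUrl: null}};

// Turn add-ons on and post them to the shared handler
// (host and port below are hypothetical).
gmeConfig.addOn.enable = true;
gmeConfig.addOn.workerUrl = 'http://addon-host:9000';

console.log(gmeConfig.addOn.workerUrl);
```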

Reverse-proxy example

Below is a haproxy.cfg example where the front-end listens on port 8888 and two webgme servers listen on 8001 and 8002 respectively.

```
frontend loadbalancer
  bind *:8888
  default_backend webgme

  mode http
  stats enable
  stats refresh 1s
  stats show-node
  stats auth admin:admin
  stats uri /haproxy?stats

backend webgme
  mode http
  balance source
  cookie server-id insert
  # hosts assumed local; adjust to where the webgme nodes actually run
  server webgme1 127.0.0.1:8001 check cookie webgme1
  server webgme2 127.0.0.1:8002 check cookie webgme2
```

With this configuration the haproxy statistics page is available at /haproxy?stats (credentials admin:admin as configured above).