Mongolastic


Mongolastic enables you to migrate your datasets from a mongod node to an elasticsearch node and vice versa. Since mongo and elastic servers can run with different characteristics, the tool provides several optional and required settings to connect them reliably. Mongolastic works with a yaml or json configuration file to drive a migration: it reads the settings in the file and starts syncing data in the specified direction.

How it works

First, you can either pull the corresponding image of the app from Docker Hub

Supported tags and respective Dockerfile links:

or download the latest mongolastic.jar file.
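
For instance, pulling the image could look like the following, where <tag> stands for one of the published tags (this is just a sketch of the Docker route, not a required step):

$ docker pull ozlerhakan/mongolastic:<tag>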

Second, create a yaml or json file which must contain the following structure:

misc:
    dindex:
        name: <string>      (1)
        as: <string>        (2)
    ctype:
        name: <string>      (3)
        as: <string>        (4)
    direction: (em | me)    (5)
    batch: <number>         (6)
    dropDataset: <bool>     (7)
mongo:
    host: <ip-address>      (8)
    port: <number>          (9)
    query: "mongo-query"    (10)
    auth:                   (11)
        user: <string>
        pwd: "password"
        source: <db-name>
        mechanism: ( plain | scram-sha-1 | x509 | gssapi | cr )
elastic:
    host: <ip-address>     (12)
    port: <number>         (13)
    dateFormat: "<format>" (14)
    longToString: <bool>   (15)
    clusterName: <string>  (16)
    auth:                  (17)
        user: <string>
        pwd: "password"
  1. the database/index name to connect to.

  2. another database/index name in which documents will be located in the target service (Optional)

  3. the collection/type name to export.

  4. another collection/type name in which indexed/collected documents will reside in the target service (Optional)

  5. direction of the data transfer. The default direction is me (that is, mongo to elasticsearch), so you can skip this option if your data moves from mongo to es. For the reverse direction, see the sketch after this list.

  6. overrides the default batch size, which is normally 200. (Optional)

  7. configures whether or not the target collection/index should be dropped prior to loading data. The default value is true. (Optional)

  8. the name of the host machine where the mongod is running.

  9. the port where the mongod instance is listening.

  10. data will be transferred based on a JSON mongodb query. (Optional)

  11. as of v1.3.5, you can access an authentication-enabled mongodb by providing an auth configuration. (Optional)

  12. the name of the host machine where the elastic node is running.

  13. the transport port where the transport module will communicate with the running elastic node. E.g. 9300 for node-to-node communication.

  14. a custom formatter for Date fields rather than the default DateCodec (Optional)

  15. serializes long values as strings for backwards compatibility with other tools. (Optional)

  16. connects to a specific elastic cluster. (Optional)

  17. as of v1.3.9, you can access an authentication-enabled elasticsearch node by providing an auth configuration. (Optional)
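
To put the optional settings together, here is a sketch of a configuration that transfers data in the em direction (elasticsearch to mongo) with a smaller batch size and MongoDB authentication. The hosts, credentials, and index/collection names below are illustrative placeholders, not values taken from the project:

misc:
    dindex:
        name: kodcu            # the source elastic index (placeholder)
    ctype:
        name: posts            # the source elastic type (placeholder)
    direction: em              # copy from elasticsearch to mongodb
    batch: 500                 # override the default batch size of 200
mongo:
    host: localhost
    port: 27017
    auth:                      # placeholder credentials
        user: admin
        pwd: "secret"
        source: admin
        mechanism: scram-sha-1
elastic:
    host: localhost
    port: 9300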


Alternatively, you can use a JSON file.

This is the JSON equivalent of the yaml schema above:
{
	"misc": {
		"dindex": {
			"name": "db-index-name",
			"as": "different-db-index-name"
		},
		"ctype": {
			"name": "collection-type-name",
			"as": "different-coll-type-name"
		},
		"direction": "em | me",
		"batch": 1234,
		"dropDataset": bool
	},
	"mongo": {
		"host": "127.0.0.1",
		"port": 27017,
		"query": "{}",
		"auth": {
			"user": "user-name",
			"pwd": "password",
			"source": "db-name",
			"mechanism": "plain | scram-sha-1 | x509 | gssapi | cr"
		}
	},
	"elastic": {
		"host": "127.0.0.1",
		"port": 9300,
		"dateFormat": "<format>",
		"longToString": bool,
		"clusterName": "cluster-name",
		"auth": {
			"user": "user-name",
			"pwd": "password"
		}
	}
}

Example

The following files have the same configuration details:

yaml file
misc:
    dindex:
        name: twitter
        as: kodcu
    ctype:
        name: tweets
        as: posts
mongo:
    host: localhost
    port: 27017
    query: "{ 'user.name' : 'kodcu.com'}"
elastic:
    host: localhost
    port: 9300
json file
{
	"misc": {
		"dindex": {
			"name": "twitter",
			"as": "kodcu"
		},
		"ctype": {
			"name": "tweets",
			"as": "posts"
		}
	},
	"mongo": {
		"host": "localhost",
		"port": 27017,
		"query": "{ 'user.name' : 'kodcu.com'}"
	},
	"elastic": {
		"host": "localhost",
		"port": 9300
	}
}

The config says that the transfer direction is from mongodb to elasticsearch. Mongolastic first looks at the tweets collection of the twitter database, located on a mongod server running on the default host interface and port, and selects the documents whose user name is kodcu.com. If it finds matching data, it starts copying those documents into an elasticsearch environment running on the default host and transport port. Once the transfer finishes, you should see a type called "posts" in an index called "kodcu" on the elastic node. The index and type names differ from the source because the "dindex.as" and "ctype.as" options were set; they indicate that the transferred data should end up in the posts type of the kodcu index.
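
Assuming the elastic node also exposes the default REST port 9200 (an assumption, since the config above only sets the transport port 9300), a quick way to spot-check the migrated documents is a search against the new index and type:

$ curl 'http://localhost:9200/kodcu/posts/_search?size=1&pretty'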

After downloading the jar or pulling the image and providing a conf file, you can either run the tool as:

$ java -jar mongolastic.jar -f config.file

or

$ docker run --rm -v $(PWD)/config.file:/config.file --net host ozlerhakan/mongolastic:<tag> config.file
Note
Every run of the tool drops the specified db/index in the target environment unless the dropDataset parameter is configured otherwise.
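
To keep existing documents in the target, you can set the dropDataset flag in the misc section of your configuration, for example:

misc:
    dropDataset: false    # do not drop the target db/index before loading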

License

Mongolastic is released under MIT.