# MongoDB Connector for Hadoop
The MongoDB Connector for Hadoop is a library that allows MongoDB (or backup files in its data format, BSON) to be used as an input source or output destination for Hadoop MapReduce tasks. It is designed to provide greater flexibility and performance, and to make it easy to integrate data in MongoDB with other parts of the Hadoop ecosystem.
Current stable release: 1.3.1
- Can create data splits to read from standalone, replica set, or sharded configurations
- Source data can be filtered with queries using the MongoDB query language
- Supports Hadoop Streaming, to allow job code to be written in any language (Python, Ruby, and Node.js are currently supported)
- Can read data from MongoDB backup files residing on S3, HDFS, or local filesystems
- Can write data out in .bson format, which can then be imported to any MongoDB database with `mongorestore`
- Works with BSON/MongoDB documents in other Hadoop tools such as Pig and Hive.
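To illustrate the streaming support above: a mapper for stock Hadoop Streaming is just a program that reads input records from stdin and writes tab-separated key/value pairs to stdout. The sketch below is a minimal word-count mapper in that style. With mongo-hadoop's streaming support, input and output are typically BSON documents handled by small language-specific helper libraries, so treat this plain-text version as a sketch of the general streaming contract rather than the connector's exact API.

```python
import sys


def map_words(lines):
    """Hadoop Streaming mapper logic: emit '<word>\t1' for every word seen.

    Hadoop Streaming feeds input records on stdin, one record per line,
    and collects tab-separated key/value pairs from stdout.
    """
    pairs = []
    for line in lines:
        for word in line.split():
            pairs.append(f"{word}\t1")
    return pairs


if __name__ == "__main__":
    for pair in map_words(sys.stdin):
        print(pair)
```

A matching reducer would read the sorted `word\t1` pairs from stdin and sum the counts per key.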
See the release page.
Run `./gradlew jar` to build the jars. Each module's jar is placed in that module's `build/libs` directory (e.g., the core module's jar is generated under its own `build/libs`).
After successfully building, you must copy the jars to the lib directory on each node in your Hadoop cluster. The exact location of this directory depends on which Hadoop release you are using.
mongo-hadoop should work on any distribution of Hadoop. Should you run into an issue, please file a JIRA ticket.
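Once the jars are on the cluster, a job can read from and write to live MongoDB collections and filter the source data with a query. The configuration keys below (`mongo.input.uri`, `mongo.output.uri`, `mongo.input.query`) are the ones the connector reads; the jar name, main class, host, and database names are placeholders for illustration.

```shell
# Hypothetical job submission: the jar, main class, host, and collections
# are placeholders; only the mongo.* configuration keys are real.
hadoop jar my-job.jar com.example.MyJob \
    -D mongo.input.uri=mongodb://db1.example.net:27017/mydb.input \
    -D mongo.output.uri=mongodb://db1.example.net:27017/mydb.output \
    -D 'mongo.input.query={"status": "A"}'
```

The value of `mongo.input.query` is an ordinary MongoDB query document, so any filter expressible in the MongoDB query language can be pushed down to the input source.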
## Usage with static .bson (mongo backup) files
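A `.bson` backup file (as produced by `mongodump`) is simply a sequence of BSON documents laid end to end, each beginning with its own total size as a little-endian 32-bit integer. That framing is what makes it possible to find document boundaries, and hence split points, without fully parsing each document. The sketch below shows the idea under that assumption; it is not the connector's actual splitter.

```python
import struct


def document_offsets(data: bytes):
    """Yield (offset, length) for each BSON document in a concatenated dump.

    Every BSON document begins with its total size (including the size
    field itself) as a little-endian int32, so we can hop from document
    to document without parsing their contents.
    """
    pos = 0
    while pos < len(data):
        (size,) = struct.unpack_from("<i", data, pos)
        yield pos, size
        pos += size
```

For example, the smallest legal BSON document (an empty one) is the five bytes `\x05\x00\x00\x00\x00`, so a file of three such documents yields offsets 0, 5, and 10.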
## Usage with Amazon Elastic MapReduce
Amazon Elastic MapReduce is a managed Hadoop framework that allows you to submit jobs to a cluster of customizable size and configuration, without needing to deal with provisioning nodes and installing software.
Using EMR with the MongoDB Connector for Hadoop allows you to run MapReduce jobs against MongoDB backup files stored in S3.
Submitting jobs that use the MongoDB Connector for Hadoop to EMR simply requires bootstrap actions that fetch the dependencies (the MongoDB Java driver, the mongo-hadoop-core library, etc.) and place them into the Hadoop distribution's lib directory on each node.
For a full example (running the Enron example on Elastic MapReduce), please see the examples section.
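A bootstrap action of this kind is typically a small shell script stored in S3 that each node runs at startup. The sketch below shows the general shape; the download URLs and the target lib directory are placeholders, not official artifact locations.

```shell
#!/bin/bash
# Hypothetical EMR bootstrap action: fetch the connector and driver jars
# and drop them into the Hadoop lib directory. All paths and URLs below
# are placeholders for illustration.
set -e

HADOOP_LIB=/home/hadoop/lib   # placeholder; depends on the EMR Hadoop layout

wget -P "$HADOOP_LIB" https://example.com/jars/mongo-hadoop-core-1.3.1.jar
wget -P "$HADOOP_LIB" https://example.com/jars/mongo-java-driver.jar
```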
## Usage with Pig
For examples on using Pig with the MongoDB Connector for Hadoop, also refer to the examples section.
## Notes for Contributors
If your code introduces new features, add tests that cover them where possible, and make sure that `./gradlew check` still passes. If you're not sure how to write a test for a feature, or are having trouble with a test failure, please post on the Google Group with details and we will try to help. Note: until FindBugs updates its dependencies, running `./gradlew check` on Java 8 will fail.
- Justin Lee (firstname.lastname@example.org)
- Mike O'Brien (email@example.com)
- Brendan McAdams (firstname.lastname@example.org)
- Eliot Horowitz (email@example.com)
- Ryan Nitz (firstname.lastname@example.org)
- Russell Jurney (@rjurney) (Lots of significant Pig improvements)
- Sarthak Dudhara (email@example.com) (BSONWritable comparable interface)
- Priya Manda (firstname.lastname@example.org) (Test Harness Code)
- Rushin Shah (email@example.com) (Test Harness Code)
- Joseph Shraibman (firstname.lastname@example.org) (Sharded Input Splits)
- Sumin Xia (email@example.com) (Sharded Input Splits)
- Jeremy Karn
- Ross Lawley
- Carsten Hufe
- Asya Kamsky
- Thomas Millar
Issue tracking: https://jira.mongodb.org/browse/HADOOP/