GlusterFS Hadoop Plugin


This document describes how to use GlusterFS as a backing store with Hadoop.

This plugin replaces the default Hadoop file system (typically, the Hadoop Distributed File System) with
GlusterFileSystem, which writes to a local directory that is a FUSE mount of a GlusterFS volume.


* Requirements

  * Supported OS is GNU/Linux
  * GlusterFS installed on all machines in the cluster
  * Java Runtime Environment (JRE)
  * Maven 3.x (needed if you are building the plugin from source)
  * JDK 6+ (needed if you are building the plugin from source)

NOTE: The plugin relies on two *nix command-line utilities to function properly. They are:

* mount: Used to mount GlusterFS volumes.
* getfattr: Used to fetch the extended attributes of a file.

Make sure they are installed on all hosts in the cluster and that their locations are in the $PATH
environment variable.
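
For example, to confirm that both utilities resolve from $PATH on a given host:

# which mount getfattr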


** NOTE: Example below is for Hadoop version 0.20.2 ($GLUSTER_HOME/hdfs/0.20.2) **

* Building the plugin from source [Maven and a JDK are required to build the plugin]

  Change to the glusterfs-hadoop directory in the GlusterFS source tree and build the plugin.

  # cd $GLUSTER_HOME/hdfs/0.20.2
  # mvn package

  On a successful build the plugin will be present in the `target` directory.
  (NOTE: the version number will be part of the plugin file name)

  # ls target/
  classes  glusterfs-0.20.2-0.1.jar  maven-archiver  surefire-reports  test-classes

  Copy the plugin to the lib/ directory in your $HADOOP_HOME dir.

  # cp target/glusterfs-0.20.2-0.1.jar $HADOOP_HOME/lib

  Copy the sample configuration file that ships with this source (conf/core-site.xml) to the conf
  directory in your $HADOOP_HOME dir.

  # cp conf/core-site.xml $HADOOP_HOME/conf

* Installing the plugin from RPM

  See the plugin documentation for installing from RPM.


  If it is tedious to do the above step(s) on all hosts in the cluster, use the script to
  build the plugin in one place and deploy it (along with the configuration file) on all other hosts.

  The script should be run on the host that is the Hadoop master [Job Tracker].

* STEPS (You would have done Steps 1 and 2 anyway while deploying Hadoop)

  1. Edit the conf/slaves file in your Hadoop distribution; one line for each slave.
  2. Set up password-less SSH between the Hadoop master and the slave(s).
  3. Edit conf/core-site.xml with all GlusterFS-related configuration (see the Configuration section below)
  4. Run the following
     # cd $GLUSTER_HOME/hdfs/0.20.2/tools
     # python ./ -b -d /path/to/hadoop/home -c

     This will build the plugin and copy it (and the config file) to all slaves (mentioned in $HADOOP_HOME/conf/slaves).

   Script options:
     -b : build the plugin
     -d : location of hadoop directory
     -c : deploy core-site.xml
     -m : deploy mapred-site.xml
     -h : deploy


* Configuration

  All plugin configuration is done in a single XML file (core-site.xml), with <name> and <value> tags inside each <property> element; a complete sample appears after the property descriptions below.

  A brief explanation of the tunables and the values they accept (change them wherever needed) follows:

  name:  fs.glusterfs.impl
  value: org.apache.hadoop.fs.glusterfs.GlusterFileSystem

         The default FileSystem API to use (there is little reason to modify this).

  name:
  value: glusterfs:///

         The default name that Hadoop uses to represent a file as a URI (typically a server:port tuple). Use any host
         in the cluster as the server and any port number. This option has to be in server:port format for Hadoop
         to create the file URI, but it is not used by the plugin.

  name:  fs.glusterfs.volname
  value: volume-dist-rep

         The volume to mount.

  name:  fs.glusterfs.mount
  value: /mnt/glusterfs

         This is the directory where the GlusterFS volume is mounted.

  name:  fs.glusterfs.server
  value: localhost

         To mount a volume the plugin needs to know the hostname or the IP of a GlusterFS server in the cluster.
         Mention it here.
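
  Putting the values above together, a populated core-site.xml would look roughly like the following
  (the volume name, mount point and server are example values; the sample file shipped in conf/core-site.xml
  is the recommended starting point):

  # cat $HADOOP_HOME/conf/core-site.xml
  <configuration>
    <property>
      <name>fs.glusterfs.impl</name>
      <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
    </property>
    <property>
      <name></name>
      <value>glusterfs:///</value>
    </property>
    <property>
      <name>fs.glusterfs.volname</name>
      <value>volume-dist-rep</value>
    </property>
    <property>
      <name>fs.glusterfs.mount</name>
      <value>/mnt/glusterfs</value>
    </property>
    <property>
      <name>fs.glusterfs.server</name>
      <value>localhost</value>
    </property>
  </configuration>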


* Usage

  Once configured, start the Hadoop Map/Reduce daemons:

  # ./bin/

  If the map/reduce job/task trackers are up, all I/O will be done to GlusterFS.
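
  As a quick check, run one of the stock example jobs and confirm that its output appears under the
  GlusterFS mount (the jar name and paths below are illustrative and depend on your Hadoop distribution):

  # ./bin/hadoop fs -put /etc/hosts /tmp/hosts
  # ./bin/hadoop jar hadoop-*examples*.jar wordcount /tmp/hosts /tmp/wordcount-out
  # ls /mnt/glusterfs/tmp/wordcount-out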


* Source Layout (./src/)

Currently, we use the Hadoop RawLocalFileSystem as the basis and wrap it with the GlusterVolume class.
That class is then used by the Hadoop 1.x (GlusterFileSystem) and Hadoop 2.x (GlusterFs) adapters.

./tools/                                                  <--- Build and Deployment Script
./conf/core-site.xml                                                         <--- Sample configuration file
./pom.xml                                                                    <--- build XML file (used by maven)

./COPYING                                                                    <--- License
./README                                                                     <--- This file


* Running the build under Jenkins

  Both methods below make the Jenkins daemon run as root, so that the build can mount GlusterFS for the unit tests.

  #Method 1) Modify JENKINS_USER in /etc/sysconfig/jenkins
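
  For example, Method 1 amounts to a one-line change in /etc/sysconfig/jenkins (root is shown here only so the build has mount privileges):

  JENKINS_USER="root"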

  #Method 2) Directly modify /etc/init.d/jenkins 
  #daemon --user "$JENKINS_USER" --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null
  daemon --user root --pidfile "$JENKINS_PID_FILE" $JAVA_CMD $PARAMS > /dev/null


* Building and running the unit tests

Building requires a working gluster mount for the unit tests.
The unit tests read test resources from a file which should be present.

1) Edit your .bashrc, or else at your terminal run:

export GLUSTER_MOUNT=/mnt/glusterfs
export HCFS_FILE_SYSTEM_CONNECTOR=org.apache.hadoop.fs.test.connector.glusterfs.GlusterFileSystemTestConnector 
export HCFS_CLASSNAME=org.apache.hadoop.fs.glusterfs.GlusterFileSystem

(In Eclipse - see below - you will add these in the "Run Configurations" menu,
under VM arguments, prefixed with -D, for example "-DGLUSTER_MOUNT=x -DHCFS_FILE_SYSTEM_CONNECTOR=y ...")
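
The complete VM arguments entry corresponding to the exports above would be:

-DGLUSTER_MOUNT=/mnt/glusterfs -DHCFS_FILE_SYSTEM_CONNECTOR=org.apache.hadoop.fs.test.connector.glusterfs.GlusterFileSystemTestConnector -DHCFS_CLASSNAME=org.apache.hadoop.fs.glusterfs.GlusterFileSystem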

2) run: 
   mvn clean package 
3) The jar artifact will be in target/


* Developing in Eclipse

0) Create a mock gluster mount:
 #Create a raw disk image and format it (the /export directory must already exist)
 truncate -s 1G /export/debugging_fun.brick
 sudo mkfs.xfs /export/debugging_fun.brick

 #Mount it as a loopback fs
 mkdir -p /mnt/mybrick
 mount -o loop /export/debugging_fun.brick /mnt/mybrick

 #Now make a directory for the gluster brick, and a mount point for gluster itself
 mkdir /mnt/mybrick/glusterbrick
 mkdir -p /mnt/glusterfs

 #Create and start a gluster volume that writes to the brick
 sudo gluster volume create HadoopVol $(hostname):/mnt/mybrick/glusterbrick
 sudo gluster volume start HadoopVol

 #Mount the volume on the mount point used above (GLUSTER_MOUNT)
 mount -t glusterfs $(hostname):HadoopVol /mnt/glusterfs

1) Run "mvn eclipse:eclipse" and import the project into Eclipse.

2) Add the exported env variables above via Run Configurations, as described in the section above.

3) Develop and run the unit tests as you would for any other Java app.