Commits on May 15, 2012
  1. @galthaus

    Merge pull request #3 from galthaus/pull-req-master-fce24911d2

    galthaus authored
    Merge essex-hack back into development. [6/26]
Commits on Apr 5, 2012
  1. @galthaus

    Merge pull request #2 from VictorLowther/pull-req-release-essex-hack-master-065ad4c283

    galthaus authored
    Pull changes from development into essex-hack [7/16]
Commits on Mar 24, 2012
  1. switch ui to use apachehadoop meta barclamp.

    cloudedge authored
Commits on Mar 17, 2012
  1. @VictorLowther

    Fix up Hadoop repo deps.

    VictorLowther authored
Commits on Feb 22, 2012
  1. @VictorLowther
Commits on Feb 14, 2012
  1. @galthaus

    Merge pull request #1 from galthaus/b925d97198badb6a88d5ce9ea9c5c56768c8d9f3

    galthaus authored
    DNS fixes (alias support and rh/ubuntu fixes) + Role by state configuration [6/26]
Commits on Feb 13, 2012
  1. @galthaus
Commits on Jan 27, 2012
  1. @galthaus

    Update the parted call to support non-standard geometries of virtual disks.

    galthaus authored
    -1s == -1M and works better.
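    The one-line body above is terse, so here is a hypothetical Ruby
    reconstruction of the kind of parted call it describes; the device name
    and the exact bounds are assumptions, not the recipe's actual code:

    ```ruby
    disk = "/dev/sdb"  # example device; the real recipe works over discovered disks
    # Ending the partition at -1M (1 MB before the end of the disk) behaves
    # better than an absolute end sector when a virtual disk reports a
    # non-standard geometry.
    system("parted", "-s", disk, "--",
           "mklabel", "gpt",
           "mkpart", "primary", "1M", "-1M") or raise "parted failed on #{disk}"
    ```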
Commits on Jan 26, 2012
  1. @VictorLowther
  2. @VictorLowther
  3. @VictorLowther
  4. @VictorLowther
Commits on Jan 18, 2012
  1. @VictorLowther
  2. @VictorLowther
Commits on Dec 10, 2011
  1. works with new drag and drop constraints. This is the ONLY barclamp with components that can be installed on the admin node.

    Rob @Zehicle Hirschfeld authored
Commits on Nov 22, 2011
  1. add menu localization

    Rob @Zehicle Hirschfeld authored
  2. add meta view

    Rob @Zehicle Hirschfeld authored
    Signed-off-by: Rob @Zehicle Hirschfeld <rob_hirschfeld@dell.com>
  3. @galthaus
Commits on Nov 17, 2011
  1. HADOOP BARCLAMP opensource project - Removed Cloudera enterprise SCM packages.

    Paul Webster authored
Commits on Nov 2, 2011
  1. Merge branch 'master' of build:barclamp-hadoop

    Rob @Zehicle Hirschfeld authored
  2. new user guide location and menu

    Rob @Zehicle Hirschfeld authored
Commits on Oct 27, 2011
  1. @VictorLowther
  2. @VictorLowther
  3. @VictorLowther

    Make configure-disks immune to SCSI device reordering and foreign devices

    VictorLowther authored
    Switch to using filesystem UUIDs for mounting and mountpoint naming.
    This makes the order in which disks were enumerated irrelevant, and enables
    us to ignore disks that we did not configure (along with printing out a
    warning during the chef-client run).
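    As a reading aid, a rough Ruby sketch of the UUID-based mounting this
    commit describes; the device, mountpoint scheme, and error handling are
    illustrative assumptions, not the recipe's actual code:

    ```ruby
    require "fileutils"

    dev  = "/dev/sdb1"  # example partition
    uuid = `blkid -s UUID -o value #{dev}`.strip
    raise "no filesystem on #{dev}" if uuid.empty?

    # Naming the mountpoint after the UUID keeps both the mount and its
    # path stable no matter how the kernel enumerated the disks.
    mountpoint = "/mnt/hdfs/#{uuid}"
    FileUtils.mkdir_p(mountpoint)
    system("mount", "UUID=#{uuid}", mountpoint) or raise "mount failed"
    ```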
  4. @VictorLowther

    Clean up status checking in the smoketest a bit more.

    VictorLowther authored
    Avoid sleeping if we don't have to when testing to see what nodes
    have what roles.
    
    Make sure we are looking at the "right" line when checking for hadoop
    service liveness with ps -aux.
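    A minimal sketch of the ps-based liveness check described here, with an
    assumed service name; the real smoketest's matching logic is not shown
    in this log:

    ```ruby
    # Grep the process table instead of trusting a possibly stale pidfile,
    # and match the java process itself so we look at the "right" line.
    def hadoop_service_alive?(svc)
      `ps aux`.lines.any? { |l| l.include?("java") && l.include?(svc) }
    end

    puts hadoop_service_alive?("namenode")  # example service name
    ```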
  5. @VictorLowther
Commits on Oct 26, 2011
  1. @VictorLowther

    Fix up hadoop service testing for the nodes.

    VictorLowther authored
    Annoyingly enough, the nodes might not be fully indexed by the time
    we try to find out what roles are on what machines.  We will have to
    spin on finding nodes until everything is indexed.
    
    Equally annoying, the hadoop service machinery has a tendency
    to report that the pidfile exists but the service is dead even when the
    service is in fact alive and well.  Switch to grepping the output of
    ps to check for service liveness instead.
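    A sketch of the two ideas above, under the assumption that the smoketest
    can shell out to knife and ssh; the query, parsing, and timeout are
    illustrative guesses:

    ```ruby
    # Hypothetical helper: node names currently visible in the Chef search
    # index. Nodes that are not yet indexed simply do not show up.
    def indexed_nodes
      out = `knife search node 'name:*' -i`
      out.lines.map(&:strip).reject { |l| l.empty? || l =~ /items? found/ }
    end

    # Spin until every expected node is indexed, giving up after a deadline.
    def wait_for_indexed_nodes(expected, timeout = 300)
      deadline = Time.now + timeout
      until (expected - indexed_nodes).empty?
        raise "nodes never fully indexed" if Time.now > deadline
        sleep 5
      end
    end

    # Check service liveness by grepping ps output, not the pidfile.
    def service_alive?(node, svc)
      system("ssh", node, "ps aux | grep -v grep | grep -q #{svc}")
    end
    ```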
Commits on Oct 25, 2011
  1. @VictorLowther
  2. @VictorLowther

    Fix up hdfs and mapred configuration on the drives on the slave nodes.

    VictorLowther authored
      * The disk recipes were not properly segregating the hdfs and
        mapreduce data on the data drives on the slaves, and they were
        not creating mapred directories on each drive.  configure-disks
        was modified to pass correct paths for dfs_data_dir and
        mapred_local_dir to the slavenode recipe for each drive it
        configured.
      * The disk handling loop in configure-disks was modified to
        use an array instead of a hash, and the array is sorted before
        we do anything.  This helps keep the order in which we perform
        operations on the drives predictable, although it is not
        "perfect" behaviour (see the sketch after this list).
      * Removed lost+found handling from the slavenode recipe, since
        we no longer store hdfs data right at the root of the drives.
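    A small sketch of the sorted-array iteration mentioned in the second
    bullet; device names and paths are illustrative:

    ```ruby
    disks = ["/dev/sdc", "/dev/sdb"]  # discovery order is not stable

    dfs_data_dirs     = []
    mapred_local_dirs = []
    # Sorting keeps the per-run order predictable, so drive0, drive1, ...
    # map to the same devices on every chef-client run.
    disks.sort.each_with_index do |dev, i|
      mount = "/mnt/hdfs/drive#{i}"
      dfs_data_dirs     << File.join(mount, "dfs", "data")
      mapred_local_dirs << File.join(mount, "mapred", "local")
    end
    ```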
  3. @VictorLowther

    Rewrite configure-disks.rb and synchronize slavenode.rb with the changes.

    VictorLowther authored
    
      * We no longer have a disk_configured flag.  Instead, configure-disks
        is idempotent.  This allows the cluster to more or less seamlessly
        handle disks coming and going (for replacement or whatever) without
        having to change anything in Crowbar.
      * configure-disks no longer uses a custom-compiled parted.
        The rewritten recipe just tells parted to create a single GPT
        partition that spans the entire drive if there is no first partition.
        This takes advantage of the newly added nuke-drives-before-hardware-install
        phase.  We also nuke the first 65K of any newly-created
        partitions to ensure that filesystem checks are not confused by
        legacy data if any happens to be there.
      * configure-disks will have parted try to align the start of the partition
        to start 1MB into the drive.  This will make sure that the resulting
        filesystem is optimally aligned to minimize RMW cycles for virtually
        every drive type out there.
      * configure-disks takes care to make sure the kernel knows about any and
        all partitioning changes.  We partprobe each drive before and after messing
        with the partitions to ensure that we don't have any confusion about
        the partition tables on the disk vs. what the kernel sees.  We also
        minimize reliance on device files in /dev, because it can take udev a
        second or two to catch up with the kernel when things change.
      * Use blkid instead of tune2fs for filesystem presence testing.
        This makes it easier to port the code to handle different filesystems.
      * Use fork/exec instead of threads when formatting the drives.
        This gets the same parallelization that threads does, but it is at
        the proper level of granularity on a Unix system, and is slightly
        simpler and more robust.
      * Move responsibility for creating the dfs_data_dir array into
        configure_disks.  (A condensed sketch of the rewritten flow follows
        this list.)
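    A condensed, hypothetical sketch of the rewritten flow these bullets
    describe; device names, the partition test, and the mkfs choice are all
    assumptions, and the real configure-disks.rb certainly differs in detail:

    ```ruby
    disks = ["/dev/sdb", "/dev/sdc"]  # example devices

    pids = disks.map do |disk|
      fork do  # fork/exec parallelism, one child per drive
        system("partprobe", disk)  # sync the kernel's view before we look
        # Idempotency: only partition if there is no first partition yet.
        unless `parted -s #{disk} print`.include?("\n 1 ")
          # One GPT partition spanning the drive, starting 1MB in for alignment.
          system("parted", "-s", disk, "--",
                 "mklabel", "gpt",
                 "mkpart", "primary", "1M", "-1M") or exit 1
          system("partprobe", disk)  # and again after changing the table
          # Nuke the first 65K of the new partition so stale metadata does
          # not confuse later filesystem probes.
          system("dd", "if=/dev/zero", "of=#{disk}1", "bs=1024", "count=64")
        end
        part = "#{disk}1"
        # blkid exits nonzero when no filesystem is present; format only then.
        system("blkid", part) or system("mkfs.ext3", part) or exit 1
      end
    end
    pids.each { |pid| Process.wait(pid) }
    ```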
  4. @VictorLowther

    Fix up some annoyances in the default and masternamenode recipes.

    VictorLowther authored
      * It is OK to be missing your slavenodes when the cluster is coming up.
        In Hadoop, slavenodes are ephemeral, and we should not mark the config
        as invalid just because we don't have them.  Besides, this eliminates
        an unneeded round of chef-client calls.
      * Do not bounce the namenode and jobtracker services every time the slaves
        file changes.  The services have built-in logic to handle slavenodes
        coming and going and they watch the slaves file for changes -- the most
        we may have to do is ask them to rescan, but even that is not needed
        most of the time.  (A sketch of this idea follows the list.)
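    An illustrative Chef-style sketch of the second bullet; the template
    source, attribute path, and service name are assumed, not taken from the
    actual recipe:

    ```ruby
    # Render the slaves file without notifying a restart: the daemons
    # watch it themselves and at most need to be asked to rescan.
    template "/etc/hadoop/conf/slaves" do
      source "slaves.erb"
      owner "root"
      group "root"
      mode "0644"
      variables(slaves: node[:hadoop][:slaves])  # assumed attribute path
      # Deliberately no `notifies :restart, "service[...]"` here.
    end

    service "hadoop-0.20-namenode" do
      action :nothing  # never bounced just because the slaves file changed
    end
    ```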
  5. @VictorLowther

    Add a working smoketest for Hadoop.

    VictorLowther authored
    This smoketest will:
    
      * Deploy Hadoop.
        We change dfs.replication to 1, and we do not reserve any space on the
        data drives.
      * Verify that all the required services are running on the nodes.
        If they are not, it will run chef-client and sleep to get the cluster
        to synchronize. We force the runs because we do not want to wait the
        15-30 minutes it may usually take to get the cluster fully running.
      * Make sure we have enough time for all the chef-clients and sleeps to
        run by giving us 900 seconds to finish the test.  (The shape of this
        loop is sketched after the list.)
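    The bare-bones shape of the loop this description implies; node names,
    the ssh-based check, and the pgrep pattern are placeholders:

    ```ruby
    DEADLINE = Time.now + 900  # the 900-second budget mentioned above

    def all_services_up?(nodes)
      nodes.all? { |n| system("ssh", n, "pgrep -f hadoop >/dev/null") }
    end

    nodes = ["namenode", "jobtracker", "slave1"]  # example node names
    until all_services_up?(nodes)
      raise "smoketest timed out" if Time.now > DEADLINE
      # Force chef-client runs rather than waiting the 15-30 minutes the
      # cluster might otherwise take to converge on its own.
      nodes.each { |n| system("ssh", n, "chef-client") }
      sleep 30
    end
    puts "hadoop smoketest passed"
    ```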
Commits on Oct 24, 2011
  1. @galthaus

    still more typos.

    galthaus authored
  2. @galthaus

    Undo Greg's bug fix.

    galthaus authored