
cluster_chef renamed to metachef: too confusing otherwise

commit 9aa80fa7f7e1227150c810dd15e4fe4d8a0a31d1 0 parents
Philip (flip) Kromer mrflip authored
Showing with 24,350 additions and 0 deletions.
  1. +3 −0  .rspec
  2. +20 −0 .watchr
  3. +15 −0 Gemfile
  4. +104 −0 README.md
  5. +18 −0 attributes/default.rb
  6. +59 −0 definitions/daemon_user.rb
  7. +27 −0 definitions/kill_old_service.rb
  8. +52 −0 definitions/standard_dirs.rb
  9. +126 −0 libraries/aspect.rb
  10. +170 −0 libraries/aspects.rb
  11. +160 −0 libraries/attr_struct.rb
  12. +152 −0 libraries/component.rb
  13. +55 −0 libraries/cookbook_utils.rb
  14. +117 −0 libraries/discovery.rb
  15. +55 −0 libraries/discovery_lol.rb
  16. +83 −0 libraries/dump_aspects.rb
  17. +18 −0 libraries/metachef.rb
  18. +91 −0 libraries/node_utils.rb
  19. +38 −0 metadata.rb
  20. +20 −0 recipes/default.rb
  21. +33 −0 spec/aspect_spec.rb
  22. +173 −0 spec/aspects_spec.rb
  23. +58 −0 spec/attr_struct_spec.rb
  24. +138 −0 spec/component_spec.rb
  25. +116 −0 spec/discovery_spec.rb
  26. +2,171 −0 spec/fixtures/chef_node-el_ridiculoso-aqui-0.json
  27. +20,138 −0 spec/fixtures/chef_resources-el_ridiculoso-aqui-0.json
  28. +30 −0 spec/spec_helper.rb
  29. +110 −0 spec/spec_helper/dummy_chef.rb
3  .rspec
@@ -0,0 +1,3 @@
+--color
+--format documentation
+--drb
20 .watchr
@@ -0,0 +1,20 @@
+# -*- ruby -*-
+
+def run_spec(file)
+ file = File.expand_path(file, File.dirname(__FILE__))
+ unless File.exist?(file)
+ Watchr.debug "#{file} does not exist"
+ return
+ end
+
+ Watchr.debug "Running #{file}"
+ system "rspec #{file}"
+end
+
+watch("spec/.*_spec\.rb") do |match|
+ run_spec(match[0])
+end
+
+watch("libraries/(.*)\.rb") do |match|
+ run_spec(%{spec/#{match[1]}_spec.rb})
+end
15 Gemfile
@@ -0,0 +1,15 @@
+source "http://rubygems.org"
+
+gem 'chef', "~> 0.10.4"
+
+# Add dependencies to develop your gem here.
+# Include everything needed to run rake, tests, features, etc.
+group :development do
+ gem 'bundler', "~> 1"
+ gem 'yard', "~> 0.6.7"
+ gem 'jeweler', "~> 1.6.4"
+ gem 'rspec', "~> 2.7.0"
+ gem 'watchr', "~> 0.7"
+ # gem 'ruby-fsevent', "~> 0.2"
+ # gem 'rev'
+end
104 README.md
@@ -0,0 +1,104 @@
+# cluster_chef chef cookbook
+
+Installs/Configures cluster_chef
+
+## Overview
+
+Cookbooks repeatably express these and other aspects:
+* "I launch these daemons: ..."
+* "I haz a bukkit, itz naem '/var/log/lol'"
+* "I have a dashboard at 'http://....:...'"
+* ... and much more.
+
+Wouldn't it be nice if announcing a log directory caused...
+ - my log rotation system to start rotating my logs?
+ - a 'disk free space' gauge to be added to the monitoring dashboard for that service?
+ - flume (or whatever) to begin picking up my logs and archiving them to a predictable location?
+ - in the case of standard apache logs, a listener to start counting the rate of requests, 200s, 404s and so forth?
+Similarly, announcing ports should mean that...
+ - the firewall and security groups configure themselves correspondingly
+ - the monitoring system starts regularly pinging the port for uptime and latency
+ - ...and pings the interfaces that the port should *not* appear on, to ensure the firewall is in place
+
+Cluster chef makes those aspects standardized and predictable, and provides integration and discovery hooks. The key is to make integration *inevitable*: no more forgetting to rotate or monitor a service, or having a config change over here screw up a dependent system over there.
+
+__________________________________________________________________________
+
+(*below is a planning document and may not perfectly reflect reality*)
+
+FIXME: **update for version_3 release**
+
+Attributes are scoped by *cookbook* and then by *component*.
+* If I declare `i_haz_a_service_itz('redis')`, it will look in `node[:redis]`.
+* If I declare `i_haz_a_service_itz('hadoop-namenode')`, it will look in `node[:hadoop]` for cookbook-wide concerns and `node[:hadoop][:namenode]` for component-specific concerns.
+
+* The cookbook scope is always named for its cookbook. Its attributes live in `node[:cookbook_name]`.
+ - if everything in the cookbook shares a concern, it sits at cookbook level. So the hadoop log directory (shared by all its components) is at `(scratch_root)/hadoop/log`.
+* If there is only one component, it can be implicitly named for its cookbook. In this case, it is omitted: the component attributes live in `node[:cookbook_name]` (which is the same as the component name).
+* If there are multiple components, they will live in `node[:cookbook_name][:component_name]` (eg `[:hadoop][:namenode]` or `[:flume][:master]`). In file names, these become `(whatever)/cookbook_name/component_name/(whatever)`; in other cases they are joined as `cookbook_name-component_name`.
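The cookbook-scope/component-scope fallback described above can be sketched in a few lines of plain Ruby (a standalone illustration with made-up attribute values, not the cookbook's actual API):

```ruby
# Component-scoped keys win over cookbook-wide keys; everything else is inherited.
def scoped_info(node, sys, subsys = nil)
  hsh = node[sys] ? node[sys].dup : {}                          # cookbook-wide concerns
  hsh.merge!(node[sys][subsys]) if subsys && node[sys] && node[sys][subsys]
  hsh                                                           # component concerns win
end

node = {
  :hadoop => {
    :user     => 'hdfs', :log_dir => '/var/log/hadoop',
    :namenode => { :user => 'hdfs-nn', :port => 50070 }
  }
}

scoped_info(node, :hadoop, :namenode)
# :user comes from the component scope ('hdfs-nn'); :log_dir is inherited from the cookbook scope
```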
+
+
+### Discovery
+
+Allow nodes to discover the location for a given service at runtime, adapting
+when new services register.
+
+#### Operations:
+
+* register for a service. A timestamp records the last registry.
+* discover all chef nodes that have registered for the given service.
+* discover the most recent chef node for that service.
+* get the 'public_ip' for a service -- the address that nodes in the larger
+ world should use
+* get the 'private_ip' for a service -- the address that nodes on the local
+  subnet / private cloud should use
+
+#### Implementation
+
+Nodes register a service by calling `announce`, which sets a hash containing
+'timestamp' (the time of registry) and other metadata passed in.
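In outline, the announce/discover cycle behaves like the following standalone sketch (an in-memory stand-in for the node metadata the cookbook actually uses; names and timestamps here are illustrative):

```ruby
# Each announcement records a sortable timestamp plus whatever metadata was passed in.
def announce(registry, component, metadata = {}, timestamp = Time.now.utc.strftime('%Y%m%d%H%M%S'))
  (registry[component] ||= []) << { 'timestamp' => timestamp }.merge(metadata)
end

# All chef nodes that have announced the component...
def discover_all(registry, component)
  registry[component] || []
end

# ...or just the most recently announced one.
def discover(registry, component)
  discover_all(registry, component).max_by { |ann| ann['timestamp'] }
end

registry = {}
announce(registry, 'redis-server', { 'addr' => '10.0.0.1' }, '20111001120000')
announce(registry, 'redis-server', { 'addr' => '10.0.0.2' }, '20111002120000')
discover(registry, 'redis-server')['addr']  # => "10.0.0.2" (newest announcement wins)
```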
+
+## Attributes
+
+* `[:tuning][:ulimit]` -
+* `[:tuning][:overcommit_memory]` - (default: "1")
+* `[:tuning][:overcommit_ratio]` - (default: "100")
+* `[:tuning][:swappiness]` - (default: "5")
+* `[:cluster_chef][:conf_dir]` - (default: "/etc/cluster_chef")
+* `[:cluster_chef][:log_dir]` - (default: "/var/log/cluster_chef")
+* `[:cluster_chef][:home_dir]` - (default: "/etc/cluster_chef")
+* `[:cluster_chef][:user]` - (default: "root")
+* `[:cluster_chef][:thttpd][:port]` - (default: "6789")
+* `[:cluster_chef][:dashboard][:run_state]` - (default: "start")
+* `[:users][:root][:primary_group]` - (default: "root")
+
+## Recipes
+
+* `burn_ami_prep` - Burn Ami Prep
+* `dashboard` - Lightweight dashboard for this machine: index of services and their dashboard snippets
+* `default` - Base configuration for cluster_chef
+* `virtualbox_metadata` - Virtualbox Metadata
+
+## Integration
+
+Supports platforms: debian and ubuntu
+
+
+## License and Author
+
+Author:: Philip (flip) Kromer - Infochimps, Inc (<coders@infochimps.com>)
+Copyright:: 2011, Philip (flip) Kromer - Infochimps, Inc
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+> readme generated by [cluster_chef](http://github.com/infochimps/cluster_chef)'s cookbook_munger
18 attributes/default.rb
@@ -0,0 +1,18 @@
+
+default[:cluster_chef][:conf_dir] = '/etc/cluster_chef'
+default[:cluster_chef][:log_dir] = '/var/log/cluster_chef'
+default[:cluster_chef][:home_dir] = '/etc/cluster_chef'
+
+default[:cluster_chef][:user] = 'root'
+
+# Request user account properties here.
+default[:users]['root'][:primary_group] = value_for_platform(
+ "openbsd" => { "default" => "wheel" },
+ "freebsd" => { "default" => "wheel" },
+ "mac_os_x" => { "default" => "wheel" },
+ "default" => "root"
+)
+
+default[:announces] ||= Mash.new
+
+default[:discovers] ||= Mash.new
59 definitions/daemon_user.rb
@@ -0,0 +1,59 @@
+
+#
+# If present, we will use node[(name)][(component)] *and then* node[(name)] to
+# look up scoped default values.
+#
+# So, daemon_user('ntp') looks for a username in node[:ntp][:user], while
+# daemon_user('ganglia.server') looks first in node[:ganglia][:server][:user]
+# and then in node[:ganglia][:user].
+#
+define(:daemon_user,
+ :action => [:create, :manage], # action. You typically want [:create, :manage] or [:create]
+ :component => nil, # if present, will use node[(name)][(component)] *and then* node[(name)] to look up values.
+ :user => nil, # username to create. default: `scoped_hash[:user]`
+ :home => nil, # home directory for daemon. default: `scoped_hash[:pid_dir]`
+ :group => nil, # group for daemon. default: `scoped_hash[:group]`
+ :comment => nil, # comment for user info
+ :create_group => true # Action to take on the group: `true` means `[:create]`, false-y means do nothing, or you can supply explicit actions (eg `[:create, :manage]`). default: true
+ ) do
+
+ sys, subsys = params[:name].to_s.split(".", 2).map(&:to_sym)
+ component = ClusterChef::Component.new(node, sys, subsys)
+
+ params[:user] ||= component.node_attr(:user, :required)
+ params[:group] ||= component.node_attr(:group) || params[:user]
+ params[:home] ||= component.node_attr(:pid_dir, :required)
+ params[:comment] ||= "#{component.name} daemon"
+ #
+ user_val = params[:user].to_s
+ group_val = params[:group].to_s
+ uid_val = node[:users ][user_val ] && node[:users ][user_val ][:uid]
+ gid_val = node[:groups][group_val] && node[:groups][group_val][:gid]
+ #
+ params[:create_group] = [:create] if (params[:create_group] == true)
+ params[:create_group] = false if (group_val == 'nogroup')
+
+ #
+ # Make the group
+ #
+ if params[:create_group] && (group_val != 'nogroup')
+ group group_val do
+ gid gid_val
+ action params[:create_group]
+ end
+ end
+
+ #
+ # Make the user
+ #
+ user user_val do
+ uid uid_val
+ gid group_val
+ password nil
+ shell '/bin/false'
+ home params[:home]
+ supports :manage_home => false # you must create standard dirs yourself
+ action params[:action]
+ end
+
+end
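The `params[:name].to_s.split(".", 2)` idiom at the top of this definition is what turns `'ganglia.server'` into a system/subsystem pair; extracted on its own:

```ruby
# 'ntp' yields [:ntp, nil]; 'ganglia.server' yields [:ganglia, :server].
def split_component(name)
  sys, subsys = name.to_s.split('.', 2).map { |part| part.to_sym }
  [sys, subsys]
end
```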
27 definitions/kill_old_service.rb
@@ -0,0 +1,27 @@
+
+define(:kill_old_service,
+ :script => nil
+ ) do
+ params[:script] ||= "/etc/init.d/#{params[:name]}"
+
+ # cheating: don't bother if the script isn't there
+ if (File.exists?(params[:script]))
+
+ service params[:name] do
+ action [:stop, :disable]
+ pattern params[:pattern] if params[:pattern]
+ only_if{ File.exists?(params[:script]) }
+ end
+
+ ruby_block("stop #{params[:name]}") do
+ block{ }
+ action :create
+ notifies :stop, "service[#{params[:name]}]", :immediately
+ only_if{ File.exists?(params[:script]) }
+ end
+
+ file(params[:script]) do
+ action :delete
+ end
+ end
+end
52 definitions/standard_dirs.rb
@@ -0,0 +1,52 @@
+STANDARD_DIRS = Mash.new({
+ :home_dir => { :uid => 'root', :gid => 'root', },
+ :conf_dir => { :uid => 'root', :gid => 'root', },
+ :lib_dir => { :uid => 'root', :gid => 'root', },
+ :log_dir => { :uid => :user, :gid => :group, :mode => "0775", },
+ :pid_dir => { :uid => :user, :gid => :group, },
+ :tmp_dir => { :uid => :user, :gid => :group, },
+ :data_dir => { :uid => :user, :gid => :group, },
+ :data_dirs => { :uid => :user, :gid => :group, },
+ :cache_dir => { :uid => :user, :gid => :group, },
+}) unless defined?(STANDARD_DIRS)
+
+#
+# If present, we will use node[(name)][(subsys)] *and then* node[(name)] to
+# look up scoped default values.
+#
+# So, daemon_user('ntp') looks for its :log_dir in node[:ntp][:log_dir], while
+# daemon_user('ganglia.server') looks first in node[:ganglia][:server][:log_dir]
+# and then in node[:ganglia][:log_dir].
+#
+define(:standard_dirs,
+ :subsys => nil, # if present, will use node[(name)][(subsys)] *and then* node[(name)] to look up values.
+ :directories => [],
+ :log_dir => nil,
+ :home_dir => nil,
+ :user => nil, # username to create. default: `scoped_hash[:user]`
+ :group => nil # group for user. default: `scoped_hash[:group]`
+ ) do
+
+ sys, subsys = params[:name].to_s.split(".", 2).map(&:to_sym)
+ component = ClusterChef::Component.new(node, sys, subsys)
+
+ params[:user] ||= component.node_attr(:user, :required)
+ params[:group] ||= component.node_attr(:group) || params[:user]
+
+ [params[:directories]].flatten.each do |dir_type|
+ dir_paths = component.node_attr(dir_type, :required) or next
+ hsh = (STANDARD_DIRS.include?(dir_type) ? STANDARD_DIRS[dir_type].dup : Mash.new)
+ hsh[:uid] = params[:user] if (hsh[:uid] == :user )
+ hsh[:gid] = params[:group] if (hsh[:gid] == :group)
+ [dir_paths].flatten.each do |dir_path|
+ directory dir_path do
+ owner hsh[:uid]
+ group hsh[:gid]
+ mode hsh[:mode] || '0755'
+ action :create
+ recursive true
+ end
+ end
+ end
+
+end
126 libraries/aspect.rb
@@ -0,0 +1,126 @@
+require File.expand_path('cluster_chef.rb', File.dirname(__FILE__))
+
+module ClusterChef
+
+ # An *aspect* is an external property, commonly encountered across multiple
+ # systems, that decoupled agents may wish to act on.
+ #
+ # For example, many systems have a Dashboard aspect -- phpMySQL, the hadoop
+ # jobtracker web console, a one-pager generated by cluster_chef's
+ # mini_dashboard recipe, or a purpose-built backend for your website. The
+ # following independent concerns can act on such dashboard aspects:
+ # * a dashboard dashboard creates a page linking to all of them
+ # * your firewall grants access from internal machines and denies access on
+ # public interfaces
+ # * the monitoring system checks that the port is open and listening
+ #
+ # Aspects are able to do the following:
+ #
+ # * Convert to and from a plain hash,
+ #
+ # * ...and thusly to and from plain node metadata attributes
+ #
+ # * discover its manifestations across all systems (on all or some
+ # machines): for example, all dashboards, or all open ports.
+ #
+ # * identify instances from a system's by-convention metadata. For
+ # example, given a chef server system at 10.29.63.45 with attributes
+ # `:chef_server => { :server_port => 4000, :dash_port => 4040 }`
+ # the PortAspect class would produce instances for 4000 and 4040, since by
+ # convention an attribute ending in `_port` means "I have a port aspect";
+ # the DashboardAspect would recognize the `dash_port` attribute and
+ # produce an instance for `http://10.29.63.45:4040`.
+ #
+ # Note:
+ #
+ # * separate *identifiable conventions* from *concrete representation* of
+ # aspects. A system announces that it has a log aspect, and by convention
+ # declares a `:log_dir` attribute. At that point it is regularized into a
+ # LogAspect instance and stored in the `node[:aspects]` tree. External
+ # concerns should only inspect these concrete Aspects, and never go
+ # hunting for things with a `:log_dir` attribute.
+ #
+ # * conventions can be messy, but aspects are perfectly uniform
+ #
+ class Aspect
+ include AttrStruct
+ extend ClusterChef::NodeUtils
+
+ dsl_attr(:component, :kind_of => ClusterChef::Component)
+ dsl_attr(:name, :kind_of => [String, Symbol])
+
+ # checks that the aspect is well-formed. returns non-empty array if there is lint.
+ #
+ # @abstract
+ # override to provide guidance, filling an array with warning strings. Include
+ # errors + super
+ # as the last line.
+ #
+ def lint
+ []
+ end
+
+ def lint!
+ lint.flatten.compact.each{|l| Chef::Log.warn(l) }
+ end
+
+ def lint_flavor
+ self.class.allowed_flavors.include?(self.flavor) ? [] : ["Unexpected #{self.class.handle} flavor #{flavor.inspect}"]
+ end
+
+ # include AttrStruct::ClassMethods
+ # include ClusterChef::NodeUtils
+
+ def self.register!
+ ClusterChef::Component.has_aspect(self)
+ end
+
+ #
+ # Extract attributes matching the given pattern.
+ #
+ # @param [Hash] info -- hash of key-val pairs
+ # @param [Regexp] regex -- filter for keys matching this pattern
+ #
+ # @yield on each match
+ # @yieldparam [String, Symbol] key -- the matching key
+ # @yieldparam [Object] val -- its value in the info hash
+ # @yieldparam [MatchData] match -- result of the regexp match
+ # @yieldreturn [Aspect] block should return an aspect
+ #
+ # @return [Array<Aspect>] collection of the block's results
+ def self.attr_matches(component, regexp)
+ results = Mash.new
+ component.node_info.each do |key, val|
+ next unless (match = regexp.match(key.to_s))
+ result = yield(key, val, match) or next
+ result.lint!
+ results[result.name] ||= result
+ end
+ results
+ end
+
+ def self.rsrc_matches(rsrc_clxn, resource_name, cookbook_name)
+ results = Mash.new
+ rsrc_clxn.each do |rsrc|
+ next unless rsrc.resource_name.to_s == resource_name.to_s
+ next unless rsrc.cookbook_name.to_s =~ /#{cookbook_name}/
+ result = yield(rsrc) or next
+ results[result.name] ||= result
+ end
+ results
+ end
+
+ # strip off module part and '...Aspect' from class name
+ # @example ClusterChef::FooAspect.handle # :foo
+ def self.handle
+ @handle ||= self.name.to_s.gsub(/.*::(\w+)Aspect\z/,'\1').gsub(/([a-z\d])([A-Z])/,'\1_\2').downcase.to_sym
+ end
+
+ def self.plural_handle
+ "#{handle}s".to_sym
+ end
+
+ # end
+ # def self.included(base) ; base.extend(ClassMethods) ; end
+ end
+end
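`Aspect.handle`'s regexp chain can be exercised in isolation; this standalone copy of the mangling shows how class names become handles:

```ruby
# Strip the module prefix and the 'Aspect' suffix, then snake-case the remainder.
def aspect_handle(class_name)
  class_name.to_s.
    gsub(/.*::(\w+)Aspect\z/, '\1').
    gsub(/([a-z\d])([A-Z])/, '\1_\2').
    downcase.to_sym
end

aspect_handle('ClusterChef::DashboardAspect')   # => :dashboard
aspect_handle('ClusterChef::UsageLimitAspect')  # => :usage_limit
```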
170 libraries/aspects.rb
@@ -0,0 +1,170 @@
+require File.expand_path('cluster_chef.rb', File.dirname(__FILE__))
+module ClusterChef
+
+ #
+ # * scope[:run_state]
+ #
+ # from the eponymous service resource,
+ # * service.path
+ # * service.pattern
+ # * service.user
+ # * service.group
+ #
+ class DaemonAspect < Aspect
+ register!
+ dsl_attr(:service_name, :kind_of => String)
+ dsl_attr(:pattern, :kind_of => String)
+ dsl_attr(:run_state, :kind_of => [String, Symbol])
+
+ def self.harvest(run_context, component)
+ rsrc_matches(run_context.resource_collection, :service, component.sys) do |rsrc|
+ next unless rsrc.name =~ /#{component.name}/
+ svc = self.new(component, rsrc.name, rsrc.service_name, rsrc.pattern)
+ svc.run_state(component.node_info[:run_state])
+ svc
+ end
+ end
+
+ def lint
+ errs = super
+ if not %w[stop start nothing].include?(run_state.to_s)
+ badness = run_state ? "Odd run_state #{run_state}" : "No run_state"
+ err = "#{badness} for daemon #{name}: set node[:#{component.sys}][:#{component.subsys}] to :stop, :start or :nothing"
+ Chef::Log.warn(err)
+ errs << err
+ end
+ errs
+ end
+ end
+
+ class PortAspect < Aspect
+ register!
+ dsl_attr(:flavor, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:port_num, :kind_of => String)
+ dsl_attr(:addrs, :kind_of => Array)
+
+ ALLOWED_FLAVORS = [ :ssh, :ntp, :ldap, :smtp, :ftp, :http, :pop, :nntp, :imap, :tcp, :https, :telnet ]
+ def self.allowed_flavors() ALLOWED_FLAVORS ; end
+
+ def self.harvest(run_context, component)
+ attr_aspects = attr_matches(component, /^((.+_)?port)$/) do |key, val, match|
+ name = match[1]
+ flavor = match[2].to_s.empty? ? :port : match[2].gsub(/_$/, '').to_sym
+ # p [match.captures, name, flavor].flatten
+ self.new(component, name, flavor, val.to_s)
+ end
+ end
+ end
+
+ class DashboardAspect < Aspect
+ register!
+ dsl_attr(:flavor, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:url, :kind_of => String)
+
+ ALLOWED_FLAVORS = [ :http, :jmx ]
+ def self.allowed_flavors() ALLOWED_FLAVORS ; end
+
+ def self.harvest(run_context, component)
+ attr_aspects = attr_matches(component, /^(.*dash)_port(s)?$/) do |key, val, match|
+ name = match[1]
+ flavor = (name == 'dash') ? :http_dash : name.to_sym
+ url = "http://#{private_ip_of(run_context.node)}:#{val}/"
+ self.new(component, name, flavor, url)
+ end
+ end
+ end
+
+ #
+ # * scope[:log_dirs]
+ # * scope[:log_dir]
+ # * flavor: http, etc
+ #
+ class LogAspect < Aspect
+ register!
+ dsl_attr(:flavor, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:dirs, :kind_of => Array)
+
+ ALLOWED_FLAVORS = [ :http, :log4j, :rails ]
+
+ def self.harvest(run_context, component)
+ attr_matches(component, /^log_dir(s?)$/) do |key, val, match|
+ name = 'log'
+ dirs = Array(val)
+ self.new(component, name, name.to_sym, dirs)
+ end
+ end
+ end
+
+ #
+ # * attributes with a _dir or _dirs suffix
+ #
+ class DirectoryAspect < Aspect
+ def self.plural_handle() :directories ; end
+ register!
+ dsl_attr(:flavor, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:dirs, :kind_of => Array)
+
+ ALLOWED_FLAVORS = [ :home, :conf, :log, :tmp, :pid, :data, :lib, :journal, ]
+ def self.allowed_flavors() ALLOWED_FLAVORS ; end
+
+ def self.harvest(run_context, component)
+ attr_aspects = attr_matches(component, /(.*)_dir(s?)$/) do |key, val, match|
+ name = match[1]
+ val = Array(val)
+ self.new(component, name, name.to_sym, val)
+ end
+ rsrc_aspects = rsrc_matches(run_context.resource_collection, :directory, component.sys) do |rsrc|
+ rsrc
+ end
+ # [attr_aspects, rsrc_aspects].flatten.each{|x| p x }
+ attr_aspects
+ end
+ end
+
+ #
+ # Code assets (jars, compiled libs, etc) that another system may wish to
+ # incorporate
+ #
+ class ExportedAspect < Aspect
+ register!
+ dsl_attr(:flavor, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:files, :kind_of => Array)
+
+ ALLOWED_FLAVORS = [:jars, :confs, :libs]
+ def self.allowed_flavors() ALLOWED_FLAVORS ; end
+
+ def lint
+ super() + [lint_flavor]
+ end
+
+ def self.harvest(run_context, component)
+ attr_matches(component, /^exported_(.*)$/) do |key, val, match|
+ name = match[1]
+ self.new(component, name, name.to_sym, val)
+ end
+ end
+ end
+
+ #
+ # manana
+ #
+
+ # # usage constraints -- ulimits, java heap size, thread count, etc
+ # class UsageLimitAspect
+ # end
+ # # deploy
+ # # package
+ # # account (user / group)
+ # class CookbookAspect < Struct.new(:component, :name,
+ # :deploys, :packages, :users, :groups, :depends, :recommends, :supports,
+ # :attributes, :recipes, :resources, :authors, :license, :version )
+ # end
+ #
+ # class CronAspect
+ # end
+ #
+ # class AuthkeyAspect
+ # end
+
+end
160 libraries/attr_struct.rb
@@ -0,0 +1,160 @@
+module ClusterChef
+ module AttrStruct
+ include Chef::Mixin::ParamsValidate
+
+ module ClassMethods
+ def keys() [] ; end
+ def keys=(val)
+ singleton_class = class << self ; self ; end
+ singleton_class.class_eval do
+ remove_method(:keys) rescue NameError
+ define_method(:keys){ val }
+ end
+ val
+ end
+
+ def dsl_attr(name, validation={})
+ name = name.to_sym
+ coerce = validation.delete(:coerce)
+ dup_default = validation.delete(:dup_default)
+ validation.delete(:doc)
+ define_method(name) do |val=nil|
+ validation[:default] = dup_default.dup unless dup_default.nil?
+ val = val.send(coerce) if val && coerce
+ set_or_return(name, val, validation)
+ end
+ self.keys |= [name]
+ end
+ end
+ def self.included(base) base.extend(ClassMethods) ; end
+
+ def initialize(*args)
+ raise ArgumentError, "wrong number of arguments (#{args.length} for #{self.class.keys.length})" if args.length > self.class.keys.length
+ args.zip(self.class.keys).each do |val, attr|
+ self.send(attr, val)
+ end
+ end
+
+ def keys
+ self.class.keys
+ end
+ def []=(attr, val)
+ self.send(attr, val) if has_key?(attr)
+ end
+
+ def each_pair
+ self.class.keys.each do |attr|
+ yield [attr, self.send(attr)]
+ end
+ end
+ def has_key?(key)
+ keys.include?(key.to_sym)
+ end
+ def ==(val)
+ val.is_a?(self.class) && (val.to_hash == self.to_hash)
+ end
+
+ #
+ # Returns a hash with each key set to its associated value.
+ #
+ # @example
+ # FooClass = Struct(:a, :b)
+ # foo = FooClass.new(100, 200)
+ # foo.to_hash # => { :a => 100, :b => 200 }
+ #
+ # @return [Hash] a new Hash instance, with each key set to its associated value.
+ #
+ def to_mash
+ Mash.new.tap do |hsh|
+ each_pair do |key, val|
+ case
+ when val.respond_to?(:to_mash) then hsh[key] = val.to_mash
+ when val.respond_to?(:to_hash) then hsh[key] = val.to_hash
+ else hsh[key] = val
+ end
+ end
+ end
+ end
+ def to_hash() to_mash.to_hash ; end
+
+ #
+ # Adds the contents of +other_hash+ to +hsh+. If no block is
+ # specified, entries with duplicate keys are overwritten with the values from
+ # +other_hash+, otherwise the value of each duplicate key is determined by
+ # calling the block with the key, its value in +hsh+ and its value in
+ # +other_hash+.
+ #
+ # @example
+ # h1 = { :a => 100, :b => 200 }
+ # h2 = { :b => 254, :c => 300 }
+ # h1.merge!(h2)
+ # # => { :a => 100, :b => 254, :c => 300 }
+ #
+ # h1 = { :a => 100, :b => 200 }
+ # h2 = { :b => 254, :c => 300 }
+ # h1.merge!(h2){|key, v1, v2| v1 }
+ # # => { :a => 100, :b => 200, :c => 300 }
+ #
+ # @overload hsh.update(other_hash) -> hsh
+ # Adds the contents of +other_hash+ to +hsh+. Entries with duplicate keys are
+ # overwritten with the values from +other_hash+
+ # @param other_hash [Hash, AttrStruct] the hash to merge (it wins)
+ # @return [AttrStruct] this attr_struct, updated
+ #
+ # @overload hsh.update(other_hash){|key, oldval, newval| block} -> hsh
+ # Adds the contents of +other_hash+ to +hsh+. The value of each duplicate key
+ # is determined by calling the block with the key, its value in +hsh+ and its
+ # value in +other_hash+.
+ # @param other_hash [Hash, AttrStruct] the hash to merge (it wins)
+ # @yield [Object, Object, Object] called if key exists in each +hsh+
+ # @return [AttrStruct] this attr_struct, updated
+ #
+ def update(other_hash)
+ raise TypeError, "can't convert #{other_hash.nil? ? 'nil' : other_hash.class} into Hash" unless other_hash.respond_to?(:each_pair)
+ other_hash.each_pair do |key, val|
+ next unless keys.include?(key.to_sym)
+ if block_given? && has_key?(key)
+ val = yield(key, val, self.send(key))
+ end
+ self[key] = val
+ end
+ self
+ end
+ alias_method :merge!, :update
+
+ #
+ # Returns a new attr_struct containing the contents of +other_hash+ and the
+ # contents of +hsh+. If no block is specified, the value for entries with
+ # duplicate keys will be that of +other_hash+. Otherwise the value for each
+ # duplicate key is determined by calling the block with the key, its value in
+ # +hsh+ and its value in +other_hash+.
+ #
+ # @example
+ # h1 = { :a => 100, :b => 200 }
+ # h2 = { :b => 254, :c => 300 }
+ # h1.merge(h2)
+ # # => { :a=>100, :b=>254, :c=>300 }
+ # h1.merge(h2){|key, oldval, newval| newval - oldval}
+ # # => { :a => 100, :b => 54, :c => 300 }
+ # h1
+ # # => { :a => 100, :b => 200 }
+ #
+ # @overload hsh.merge(other_hash) -> hsh
+ # Adds the contents of +other_hash+ to +hsh+. Entries with duplicate keys are
+ # overwritten with the values from +other_hash+
+ # @param other_hash [Hash, AttrStruct] the hash to merge (it wins)
+ # @return [AttrStruct] a new merged attr_struct
+ #
+ # @overload hsh.merge(other_hash){|key, oldval, newval| block} -> hsh
+ # Adds the contents of +other_hash+ to +hsh+. The value of each duplicate key
+ # is determined by calling the block with the key, its value in +hsh+ and its
+ # value in +other_hash+.
+ # @param other_hash [Hash, AttrStruct] the hash to merge (it wins)
+ # @yield [Object, Object, Object] called if key exists in each +hsh+
+ # @return [AttrStruct] a new merged attr_struct
+ #
+ def merge(*args, &block)
+ self.dup.update(*args, &block)
+ end
+ end
+end
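The set-or-return pattern behind `dsl_attr` reduces, minus Chef's `ParamsValidate` machinery, to a short piece of metaprogramming (a simplified sketch, not the module above; `Widget`/`port` are invented names):

```ruby
module MiniAttrStruct
  def self.included(base) ; base.extend(ClassMethods) ; end

  module ClassMethods
    # One method serves as both getter (no argument) and setter (with argument).
    def dsl_attr(name)
      define_method(name) do |val = nil|
        @attrs ||= {}
        @attrs[name] = val unless val.nil?
        @attrs[name]
      end
    end
  end
end

class Widget
  include MiniAttrStruct
  dsl_attr :port
end

w = Widget.new
w.port 8080   # setter form, as used inside the cookbook's DSL blocks
w.port        # => 8080
```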
152 libraries/component.rb
@@ -0,0 +1,152 @@
+require File.expand_path('cluster_chef.rb', File.dirname(__FILE__))
+
+module ClusterChef
+ #
+ #
+ #
+ #
+ #
+ class Component
+ include ClusterChef::AttrStruct
+ include ClusterChef::NodeUtils
+ attr_reader(:node)
+ dsl_attr(:sys, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:subsys, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:name, :kind_of => String, :coerce => :to_s)
+ dsl_attr(:realm, :kind_of => Symbol, :coerce => :to_sym)
+ dsl_attr(:timestamp, :kind_of => String, :regex => /\d{10}/)
+
+ def initialize(node, sys, subsys, hsh={})
+ @node = node
+ super(sys, subsys)
+ self.name subsys.to_s.empty? ? sys.to_sym : "#{sys}_#{subsys}".to_sym
+ self.timestamp ClusterChef::NodeUtils.timestamp
+ merge!(hsh)
+ end
+
+ # A segmented name for the component
+ # @example
+ # ClusterChef::Component.new(rc, :redis, :server, :realm => 'krypton').fullname
+ # # => 'krypton-redis-server'
+ # ClusterChef::Component.new(rc, :nfs, nil, :realm => 'krypton').fullname
+ # # => 'krypton-nfs'
+ #
+ # @return [String] the component's dash-separated name
+ def fullname
+ self.class.fullname(realm, sys, subsys)
+ end
+
+ # A segmented name for the component
+ def self.fullname(realm, sys, subsys=nil)
+ [realm, sys, subsys].compact.join('-')
+ end
+
+ #
+ # Sugar for essential node attributes
+ #
+
+ # Node's cluster name
+ def cluster() node[:cluster_name] ; end
+ # Node's facet name
+ def facet() node[:facet_name] ; end
+ # Node's facet index
+ def facet_index() node[:facet_index] ; end
+
+ def public_ip
+ public_ip_of(node)
+ end
+
+ def private_ip
+ private_ip_of(node)
+ end
+
+ def private_hostname
+ private_hostname_of(node)
+ end
+
+ #
+ # Aspects
+ #
+
+ # Harvest all aspects findable in the given node metadata hash
+ #
+ # @example
+ # component.harvest_all(run_context)
+ # component.dashboard(:webui) # #<DashboardAspect name='webui' url="http://10.x.x.x:4040/">
+ # component.port(:webui_dash_port) # #<PortAspect port=4040 addr="10.x.x.x">
+ #
+ def harvest_all(run_context)
+ self.class.aspect_types.each do |aspect_name, aspect_klass|
+ res = aspect_klass.harvest(run_context, self)
+ self.send(aspect_name, res)
+ end
+ end
+
+ # list of known aspects
+ def self.aspect_types
+ @aspect_types ||= Mash.new
+ end
+
+ # add this class to the list of registered aspects
+ def self.has_aspect(klass)
+ self.aspect_types[klass.plural_handle] = klass
+ dsl_attr(klass.plural_handle, :kind_of => Mash, :dup_default => Mash.new)
+ define_method(klass.handle) do |name, val=nil, &block|
+ hsh = self.send(klass.plural_handle)
+ #
+ hsh[name] = val if val
+ # instance eval if block given (auto-vivify if necessary)
+ if block
+ hsh[name] ||= klass.new(self, name)
+ hsh[name].instance_eval(&block)
+ end
+ #
+ hsh[name]
+ end
+ end
+
+ #
+ # Serialize in/out of Node
+ #
+
+ # Combines the hash for a system with the hash for its given subsys.
+ # This lets us ask about the +:user+ for the 'redis.server' component,
+ # whether it's set in +node[:redis][:server][:user]+ or
+ # +node[:redis][:user]+. If an attribute exists on both the parent and
+ # subsys hash, the subsys hash's value wins (see +:user+ in the
+ # example below).
+ #
+ # @example
+ # node.to_hash
+ # # { :hadoop => {
+ # # :user => 'hdfs', :log_dir => '/var/log/hadoop',
+ # # :jobtracker => { :user => 'mapred', :port => 50030 } }
+ # # }
+ # node_info(:hadoop, :jobtracker)
+ # # { :user => 'mapred', :log_dir => '/var/log/hadoop', :port => 50030,
+ # # :jobtracker => { :user => 'mapred', :port => 50030 } }
+ # node_info(:hadoop, nil)
+ # # { :user => 'hdfs', :log_dir => '/var/log/hadoop',
+ # # :jobtracker => { :user => 'mapred', :port => 50030 } }
+ #
+ #
+ def node_info
+ unless node[sys] then Chef::Log.warn("no system data in component '#{name}', node '#{node}'") ; return Mash.new ; end
+ hsh = Mash.new(node[sys].to_hash)
+ if node[sys][subsys]
+ hsh.merge!(node[sys][subsys])
+ elsif (subsys.to_s != '') && (not node[sys].has_key?(subsys))
+ Chef::Log.warn("no subsystem data in component '#{name}', node '#{node}'")
+ end
+ hsh
+ end
+
+ def node_attr(attr, required=nil)
+ if required && (not node_info.has_key?(attr))
+ Chef::Log.warn "No definition for #{attr} in #{name} - set node[:#{sys}][:#{subsys}][#{attr.inspect}] or node[:#{sys}][#{attr.inspect}]\n#{caller[0..4].join("\n ")}"
+ end
+ node_info[attr]
+ end
+
+ end
+end
55 libraries/cookbook_utils.rb
@@ -0,0 +1,55 @@
+
+module ClusterChef
+ module CookbookUtils
+
+ #
+ # Run state helpers
+ #
+
+ def run_state_includes?(hsh, state)
+ Array(hsh[:run_state]).map(&:to_s).include?(state.to_s)
+ end
+
+ def startable?(hsh)
+ run_state_includes?(hsh, :start)
+ end
+
+ #
+ # Assert node state
+ #
+ def complain_if_not_sun_java(program)
+ unless( node['java']['install_flavor'] == 'sun')
+ warn "Warning!! You are *strongly* recommended to use Sun Java for #{program}. Set node['java']['install_flavor'] = 'sun' in a role -- right now it's '#{node['java']['install_flavor']}'"
+ end
+ end
+
+ #
+ # Best public or private IP
+ #
+
+ #
+ # Change all occurrences of a given line in-place in a file
+ #
+ # @param [String] name - name for the resource invocation
+ # @param [String] filename - the file to modify (in-place)
+ # @param [String] old_line - the string to replace
+ # @param [String] new_line - the string to insert in its place
+ # @param [String] shibboleth - a simple foolproof string that should be
+ # present after this works
+ #
+ def munge_one_line(name, filename, old_line, new_line, shibboleth)
+ execute name do
+ command %Q{sed -i -e 's|#{old_line}|#{new_line}|' '#{filename}'}
+ not_if %Q{grep -q -e '#{shibboleth}' '#{filename}'}
+ only_if{ File.exists?(filename) }
+ yield if block_given?
+ end
+ end
+
+ end
+end
+
+class Chef::ResourceDefinition ; include ClusterChef::CookbookUtils ; end
+class Chef::Resource ; include ClusterChef::CookbookUtils ; end
+class Chef::Recipe ; include ClusterChef::CookbookUtils ; end
+class Chef::Provider::NodeMetadata < Chef::Provider ; include ClusterChef::CookbookUtils ; end
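`munge_one_line` shells out to sed, guarded so the edit only happens once. Its effect on a file's text can be sketched in plain Ruby; `munge_text` is a hypothetical helper for illustration, not part of the cookbook:

```ruby
# Plain-Ruby sketch of what munge_one_line does to a file's contents:
# if the shibboleth is already present the text is left alone (the not_if
# guard), otherwise every occurrence of old_line becomes new_line.
def munge_text(text, old_line, new_line, shibboleth)
  return text if text.include?(shibboleth)   # not_if: already munged
  text.gsub(old_line, new_line)              # sed -i -e 's|old|new|'
end

munge_text("export FOO=1\n", "FOO=1", "FOO=2", "FOO=2")
# => "export FOO=2\n"
munge_text("export FOO=2\n", "FOO=1", "FOO=2", "FOO=2")
# => "export FOO=2\n"  (unchanged: shibboleth already present)
```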
117 libraries/discovery.rb
@@ -0,0 +1,117 @@
+require File.expand_path('metachef.rb', File.dirname(__FILE__))
+
+module ClusterChef
+
+ #
+ # ClusterChef::Discovery --
+ #
+ # Allow nodes to discover the location for a given component at runtime, adapting
+ # when new components announce.
+ #
+ # Operations:
+ #
+ # * announce a component. A timestamp records the last announcement.
+ # * discover all servers announcing the given component.
+ # * discover the most recent server for that component.
+ #
+ #
+ module Discovery
+ include NodeUtils
+
+ #
+ # Announce that you provide the given component in some realm (by default,
+ # this node's cluster).
+ #
+ # @param [Symbol] sys name of the system
+ # @param [Symbol] subsys name of the subsystem
+ # @param [Hash] opts extra attributes to pass to the component object
+ # @option opts [String] :realm Offer the component within this realm -- by
+ # default, the current node's cluster
+ #
+ def announce(sys, subsys, opts={}, &block)
+ opts = Mash.new(opts)
+ opts[:realm] ||= node[:cluster_name]
+ component = Component.new(node, sys, subsys, opts)
+ Chef::Log.info("Announcing component #{component.fullname}")
+ #
+ component.instance_eval(&block) if block
+ #
+ node.set[:announces][component.fullname] = component.to_hash
+ node_changed!
+ component
+ end
+
+ # Find all announcements for the given system
+ #
+ # @example
+ # discover_all(:cassandra, :seeds) # all cassandra seeds for current cluster
+ # discover_all(:cassandra, :seeds, 'bukkit') # all cassandra seeds for 'bukkit' cluster
+ #
+ # @return [Array&lt;ClusterChef::Component&gt;] components for every server announcing it, most recently-announced last
+ def discover_all(sys, subsys, realm=nil)
+ realm ||= discovery_realm(sys,subsys)
+ component_name = ClusterChef::Component.fullname(realm, sys, subsys)
+ #
+ servers = discover_all_nodes(component_name)
+ servers.map do |server|
+ hsh = server[:announces][component_name]
+ hsh[:realm] = realm
+ ClusterChef::Component.new(server, sys, subsys, hsh)
+ end
+ end
+
+ # Find the latest announcement for the given system
+ #
+ # @example
+ # discover(:redis, :server) # redis server for current cluster
+ # discover(:redis, :server, 'uploader') # redis server for 'uploader' realm
+ #
+ # @return [ClusterChef::Component] component from the most recently-announcing server
+ def discover(sys, subsys, realm=nil)
+ discover_all(sys, subsys, realm).last or raise("Cannot find '#{realm || discovery_realm(sys, subsys)}-#{sys}-#{subsys}'")
+ end
+
+ def discovery_realm(sys, subsys=nil)
+ node[:discovers][sys][subsys] rescue node[:cluster_name]
+ end
+
+ def node_components(server)
+ server[:announces].map do |name, hsh|
+ realm, sys, subsys = name.split("-", 3)
+ hsh[:realm] = realm
+ Chef::Log.debug(['node_components', name, realm, sys, subsys].inspect)
+ ClusterChef::Component.new(server, sys, subsys, hsh)
+ end
+ end
+
+ # Discover all components with the given aspect -- eg logs, or ports, or
+ # dashboards -- on the current node
+ #
+ # @param [Symbol] aspect in handle form
+ #
+ # @example
+ # components_with(:log)
+ #
+ def components_with(aspect)
+ node_components(self.node).select{|comp| not comp.send("#{aspect}s").empty? } # assumes a regular plural accessor (logs, ports, ...)
+ end
+
+ protected
+ #
+ # all nodes that have announced the given component, in ascending order of
+ # timestamp (most recent is last)
+ #
+ def discover_all_nodes(component_name)
+ all_servers = search(:node, "announces:#{component_name}" ) rescue []
+ all_servers.reject!{|server| server.name == node.name} # remove this node...
+ all_servers << node if node[:announces][component_name] # & use a fresh version
+ Chef::Log.warn("No node announced for '#{component_name}'") if all_servers.empty?
+ all_servers.sort_by{|server| server[:announces][component_name][:timestamp] }
+ end
+
+ end
+end
+
+class Chef::ResourceDefinition ; include ClusterChef::Discovery ; end
+class Chef::Resource ; include ClusterChef::Discovery ; end
+class Chef::Recipe ; include ClusterChef::Discovery ; end
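The ordering contract in `discover_all_nodes` -- announcing servers sorted by announcement timestamp, most recent last, so that `#discover` can simply take `.last` -- can be sketched with plain hashes (server names and timestamps below are made up):

```ruby
# Sketch: servers are sorted by the timestamp recorded when they announced,
# so the last element of the sorted list is the freshest announcer.
servers = [
  { 'name' => 'el_ridiculoso-aqui-0',    'timestamp' => '20110315120000Z' },
  { 'name' => 'el_ridiculoso-pequeno-0', 'timestamp' => '20110314090000Z' },
]
sorted = servers.sort_by{|srv| srv['timestamp'] }

sorted.first['name']  # oldest announcer
sorted.last['name']   # => 'el_ridiculoso-aqui-0', the most recent
```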
55 libraries/discovery_lol.rb
@@ -0,0 +1,55 @@
+module ClusterChef
+ ::ClusterChef::Discovery.class_eval do
+ # --------------------------------------------------------------------------
+ #
+ # Alternate syntax
+ #
+
+ # alias for #discovers
+ #
+ # @example
+ # can_haz(:redis) # => {
+ # :in_yr => 'uploader_queue', # alias for realm
+ # :mah_bukkit => '/var/log/uploader', # alias for logs
+ # :mah_sunbeam => '/usr/local/share/uploader', # home dir
+ # :ceiling_cat => 'http://10.80.222.69:2345/', # dashboards
+ # :o_rly => ['volumes'], # concerns
+ # :zomg => ['redis_server'], # daemons
+ # :btw => %Q{Queue to process uploads} # description
+ # }
+ #
+ #
+ def can_haz(name, options={})
+ system = discover(name, options)
+ MAH_ASPECTZ_THEYR.each do |lol, real|
+ system[lol] = system.delete(real) if system.has_key?(real)
+ end
+ system
+ end
+
+ # alias for #announces. As with #announces, all params besides name are
+ # optional -- follow the conventions wherever possible. MAH_ASPECTZ_THEYR
+ # has the full list of alternate aspect names.
+ #
+ # @example
+ # # announce a redis; everything according to convention except for the
+ # # custom log directory.
+ # i_haz_a(:redis, :mah_bukkit => '/var/log/uploader' )
+ #
+ def i_haz_a(system, aspects)
+ MAH_ASPECTZ_THEYR.each do |lol, real|
+ aspects[real] = aspects.delete(lol) if aspects.has_key?(lol)
+ end
+ announces(system, aspects)
+ end
+
+ # Alternate names for machine aspects. Only available through #i_haz_a and
+ # #can_haz.
+ #
+ MAH_ASPECTZ_THEYR = {
+ :in_yr => :realm, :mah_bukkit => :logs, :mah_sunbeam => :home,
+ :ceiling_cat => :dashboards, :o_rly => :concerns, :zomg => :daemons,
+ :btw => :description,
+ }
+ end
+end
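The aliasing done by `i_haz_a` is just key renaming over the aspects hash. A standalone sketch, using a trimmed alias table and a hypothetical `delolcat` helper name:

```ruby
# Standalone sketch of the MAH_ASPECTZ_THEYR renaming performed by i_haz_a:
# each lolspeak key present in the hash is replaced by its real aspect name;
# keys not in the alias table pass through untouched.
ALIASES = { :in_yr => :realm, :mah_bukkit => :logs, :mah_sunbeam => :home }

def delolcat(aspects)
  ALIASES.each do |lol, real|
    aspects[real] = aspects.delete(lol) if aspects.has_key?(lol)
  end
  aspects
end

delolcat(:mah_bukkit => '/var/log/uploader')
# => { :logs => '/var/log/uploader' }
```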
83 libraries/dump_aspects.rb
@@ -0,0 +1,83 @@
+module ClusterChef
+ module Discovery
+ module_function
+
+ def dump_aspects(run_context)
+ [
+ [:cassandra, :server],
+ [:chef_client, :client],
+ # [:dash_dash, :dashboard],
+ [:cluster_chef, :dashboard],
+ [:cron, :daemon],
+ [:elasticsearch, :datanode],
+ [:elasticsearch, :httpnode],
+ [:flume, :client],
+ [:flume, :master],
+ [:ganglia, :master],
+ [:ganglia, :monitor],
+ [:graphite, :carbon],
+ [:graphite, :dashboard],
+ [:graphite, :whisper],
+ [:hadoop, :datanode],
+ [:hadoop, :hdfs_fuse],
+ [:hadoop, :jobtracker],
+ [:hadoop, :namenode],
+ [:hadoop, :secondarynn],
+ [:hadoop, :tasktracker],
+ [:hbase, :master],
+ [:hbase, :regionserver],
+ [:hbase, :stargate],
+ [:nfs, :server],
+ [:nginx, :server],
+ [:ntp, :server],
+ [:redis, :server],
+ [:resque, :dashboard],
+ [:ssh, :daemon],
+ [:statsd, :server],
+ [:zookeeper, :server],
+
+ # [:apache, :server],
+ # [:mongodb, :server],
+ # [:mysql, :server],
+ # [:zabbix, :monitor],
+ # [:zabbix, :server],
+ # [:goliath, :app],
+ # [:unicorn, :app],
+ # [:apt_cacher, :server],
+ # [:bluepill, :monitor],
+ # [:resque, :worker],
+
+ ].each do |sys, component|
+ aspects = announce(run_context, sys, component)
+ pad = ([""]*20)
+ dump_line = dump(aspects) || []
+ puts( "%-15s\t%-15s\t%-23s\t| %-51s\t| %-12s\t#{"%-7s\t"*12}" % [sys, component, dump_line, pad].flatten )
+ end
+
+ run_context.resource_collection.select{|r| r.resource_name.to_s == 'service' }.each{|r| p [r.name, r.action] }
+ end
+
+
+ def dump(aspects)
+ return if aspects.empty?
+ vals = [
+ aspects[:daemon ].map{|asp| asp.name }.join(",")[0..20],
+ aspects[:port ].map{|asp| "#{asp.flavor}=#{asp.port_num}" }.join(","),
+ aspects[:dashboard ].map{|asp| asp.name }.join(","),
+ aspects[:log ].map{|asp| asp.name }.join(","),
+ DirectoryAspect::ALLOWED_FLAVORS.map do |flavor|
+ asp = aspects[:directory ].detect{|asp| asp[:flavor] == flavor }
+ # asp ? "#{asp.flavor}=#{asp.path}" : ""
+ asp ? asp.name : ""
+ end,
+ ExportedAspect::ALLOWED_FLAVORS.map do |flavor|
+ asp = aspects[:exported ].detect{|asp| asp[:flavor] == flavor }
+ # asp ? "#{asp.flavor}=#{asp.files.join(",")}" : ""
+ asp ? asp.name : ""
+ end,
+ ]
+ vals
+ end
+
+ end
+end
18 libraries/metachef.rb
@@ -0,0 +1,18 @@
+$LOAD_PATH.unshift(File.dirname(__FILE__))
+
+# $LOAD_PATH.unshift(File.expand_path('../../../lib'), File.dirname(__FILE__))
+# require 'cluster_chef/dsl_object'
+
+#
+# Dependencies for cluster_chef libraries
+#
+require File.expand_path('attr_struct.rb', File.dirname(__FILE__))
+require File.expand_path('node_utils.rb', File.dirname(__FILE__))
+require File.expand_path('component.rb', File.dirname(__FILE__))
+require File.expand_path('aspect.rb', File.dirname(__FILE__))
+require File.expand_path('discovery.rb', File.dirname(__FILE__))
+
+# require File.expand_path('aspects.rb', File.dirname(__FILE__))
+# require CLUSTER_CHEF_DIR("libraries/aspect")
+# require CLUSTER_CHEF_DIR("libraries/aspects")
+# require CLUSTER_CHEF_DIR("libraries/discovery")
91 libraries/node_utils.rb
@@ -0,0 +1,91 @@
+#
+# Author:: Philip (flip) Kromer for Infochimps.org
+# Cookbook Name:: cluster_chef
+# Library:: node_utils
+#
+# Description::
+#
+# Copyright 2011, Infochimps, Inc
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module ClusterChef
+ #
+ # Useful methods for node metadata:
+ # * best guess for node's private interface / public interface
+ # * force-save node if changed
+ #
+ module NodeUtils
+ module_function # call NodeUtils.foo, or include and call #foo
+
+ #
+ # Public / Private interface best-guessing
+ #
+
+ # The local-only ip address for the given server
+ def private_ip_of(server)
+ server[:cloud][:private_ips].first rescue server[:ipaddress]
+ end
+
+ # The local-only hostname for the given server
+ def private_hostname_of(server)
+ server[:fqdn]
+ end
+
+ # The globally-accessible ip address for the given server
+ def public_ip_of(server)
+ server[:cloud][:public_ips].first rescue server[:ipaddress]
+ end
+
+ #
+ # Attribute helpers
+ #
+
+ # A compact timestamp, to record when services are registered
+ def self.timestamp
+ Time.now.utc.strftime("%Y%m%d%H%M%SZ")
+ end
+
+ #
+ # Saving node
+ #
+
+ def node_changed!
+ @node_changed = true
+ end
+
+ def node_changed?
+ !! @node_changed
+ end
+
+ MIN_VERSION_FOR_SAVE = "0.8.0" unless defined?(MIN_VERSION_FOR_SAVE)
+
+ # Save the node, unless we're in chef-solo mode (or an ancient version)
+ def save_node!(node)
+ return unless node_changed?
+ # taken from ebs_volume cookbook
+ if Chef::VERSION !~ /^0\.[1-8]\b/
+ if not Chef::Config.solo
+ Chef::Log.info('Saving Node!!!!')
+ node.save
+ else
+ Chef::Log.warn("Skipping node save since we are running under chef-solo. Node attributes will not be persisted.")
+ end
+ else
+ Chef::Log.warn("Skipping node save: Chef version #{Chef::VERSION} (prior to #{MIN_VERSION_FOR_SAVE}) can't save");
+ end
+ end
+
+ end
+end
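The compact timestamp format above is chosen so that plain string comparison matches chronological order, which is what `discover_all_nodes`'s `sort_by` relies on. A quick check (the example times are made up):

```ruby
# The "%Y%m%d%H%M%SZ" format sorts lexically in chronological order: fields
# run from most to least significant and are zero-padded, so announcement
# timestamps can be compared and sorted as plain strings.
earlier = Time.utc(2011, 3, 14,  9, 0, 0).strftime("%Y%m%d%H%M%SZ")
later   = Time.utc(2011, 3, 15, 12, 0, 0).strftime("%Y%m%d%H%M%SZ")

earlier           # => "20110314090000Z"
(earlier < later) # => true -- string order == time order
```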
38 metadata.rb
@@ -0,0 +1,38 @@
+maintainer "Philip (flip) Kromer - Infochimps, Inc"
+maintainer_email "coders@infochimps.com"
+license "Apache 2.0"
+long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
+version "3.0.3"
+
+description "Cluster orchestration -- coordinates discovery, integration and decoupling of cookbooks"
+
+recipe "cluster_chef::default", "Base configuration for cluster_chef"
+
+%w[ debian ubuntu ].each do |os|
+ supports os
+end
+
+attribute "cluster_chef/conf_dir",
+ :display_name => "",
+ :description => "",
+ :default => "/etc/cluster_chef"
+
+attribute "cluster_chef/log_dir",
+ :display_name => "",
+ :description => "",
+ :default => "/var/log/cluster_chef"
+
+attribute "cluster_chef/home_dir",
+ :display_name => "",
+ :description => "",
+ :default => "/etc/cluster_chef"
+
+attribute "cluster_chef/user",
+ :display_name => "",
+ :description => "",
+ :default => "root"
+
+attribute "users/root/primary_group",
+ :display_name => "",
+ :description => "",
+ :default => "root"
20 recipes/default.rb
@@ -0,0 +1,20 @@
+#
+# Cookbook Name:: cluster_chef
+# Description:: Base configuration for cluster_chef
+# Recipe:: default
+# Author:: Philip (flip) Kromer
+#
+# Copyright 2011, Philip (flip) Kromer
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
33 spec/aspect_spec.rb
@@ -0,0 +1,33 @@
+require File.expand_path(File.dirname(__FILE__) + '/spec_helper')
+require CLUSTER_CHEF_DIR("libraries/metachef")
+require CLUSTER_CHEF_DIR("libraries/aspects")
+
+describe ClusterChef::Aspect do
+ include_context 'dummy_chef'
+
+ let(:foo_aspect){ Class.new(ClusterChef::Aspect){ def self.handle() :foo end } }
+ after(:each) do
+ ClusterChef::Component.keys.delete(:foos)
+ ClusterChef::Component.aspect_types.delete(:foos)
+ end
+
+ it 'knows its handle' do
+ foo_aspect.handle.should == :foo
+ end
+
+ context 'register!' do
+ it 'shows up in the Component.aspect_types' do
+ ClusterChef::Component.aspect_types.should_not include(foo_aspect)
+ foo_aspect.register!
+ ClusterChef::Component.aspect_types[:foos].should == foo_aspect
+ end
+
+ it 'means it is called when a Component#harvest_all aspects' do
+ foo_aspect.register!
+ rc = Chef::RunContext.new(Chef::Node.new, [])
+ foo_aspect.should_receive(:harvest).with(rc, chef_server_component)
+ chef_server_component.harvest_all(rc)
+ end
+ end
+
+end
173 spec/aspects_spec.rb
@@ -0,0 +1,173 @@
+require File.expand_path(File.dirname(__FILE__) + '/spec_helper')
+require CLUSTER_CHEF_DIR("libraries/metachef")
+require CLUSTER_CHEF_DIR("libraries/aspects")
+
+describe 'aspect' do
+ include_context 'dummy_chef'
+
+ def harvest_klass(component)
+ described_class.harvest(chef_context, component)
+ end
+ let(:component){ hadoop_datanode_component }
+ let(:harvested){ harvest_klass(component) }
+ let(:subject){ harvested.values.first }
+
+ describe ClusterChef::PortAspect do
+ it 'harvests any "*_port" attributes' do
+ harvested.should == Mash.new({
+ :dash_port => ClusterChef::PortAspect.new(component, "dash_port", :dash, "50075"),
+ :ipc_port => ClusterChef::PortAspect.new(component, "ipc_port", :ipc, "50020"),
+ :jmx_dash_port => ClusterChef::PortAspect.new(component, "jmx_dash_port", :jmx_dash, "8006"),
+ :port => ClusterChef::PortAspect.new(component, "port", :port, "50010"),
+ })
+ end
+
+ # context '#addrs' do
+ # it 'can be marked :critical, :open, :closed or :ignore'
+ # it 'marks first private interface open by default'
+ # it 'marks other interfaces closed by default'
+ # end
+ # context '#flavor' do
+ # it 'accepts a defined flavor'
+ # end
+ # context '#monitors' do
+ # it 'accepts an arbitrary hash'
+ # end
+ end
+
+ describe ClusterChef::DashboardAspect do
+ it 'harvests any "dash_port" attributes' do
+ harvested.should == Mash.new({
+ :dash => ClusterChef::DashboardAspect.new(component, "dash", :http_dash, "http://33.33.33.12:50075/"),
+ :jmx_dash => ClusterChef::DashboardAspect.new(component, "jmx_dash", :jmx_dash, "http://33.33.33.12:8006/"),
+ })
+ end
+ it 'by default harvests the url from the private_ip and dash_port'
+ it 'lets me set the URL with an explicit template'
+ end
+
+ describe ClusterChef::DaemonAspect do
+ it 'harvests its associated service resource' do
+ harvested.should == Mash.new({
+ :hadoop_datanode => ClusterChef::DaemonAspect.new(component, "hadoop_datanode", "hadoop_datanode", "hadoop_datanode", 'start'),
+ })
+ end
+
+ context '#run_state' do
+ it 'harvests the :run_state attribute' do
+ subject.run_state.should == 'start'
+ end
+ it 'only accepts :start, :stop or :nothing' do
+ chef_node[:hadoop][:datanode][:run_state] = 'shazbot'
+ Chef::Log.should_receive(:warn).with("Odd run_state shazbot for daemon hadoop_datanode: set node[:hadoop][:datanode] to :stop, :start or :nothing")
+ subject.lint
+ end
+ end
+
+ # context '#service_name' do
+ # it 'defaults to' do
+ #
+ # end
+ # end
+
+ # context '#boot_state' do
+ # it 'harvests the :boot_state attribute'
+ # it 'can be set explicitly'
+ # it 'only accepts :enable, :disable or nil'
+ # end
+ # context '#pattern' do
+ # it 'harvests the :pattern attribute from the associated service resource'
+ # it 'is not settable explicitly'
+ # end
+ # context '#limits' do
+ # it 'accepts an arbitrary hash'
+ # it 'harvests the :limits hash'
+ # end
+ end
+
+ describe ClusterChef::LogAspect do
+ let(:component){ flume_node_component }
+ it 'harvests any "log_dir" attributes' do
+ harvested.should == Mash.new({
+ :log => ClusterChef::LogAspect.new(component, "log", :log, ["/var/log/flume"]),
+ })
+ end
+ # context '#flavor' do
+ # it 'accepts :http, :log4j, or :rails'
+ # end
+ end
+
+ describe ClusterChef::DirectoryAspect do
+ let(:component){ flume_node_component }
+ it 'harvests attributes ending with "_dir"' do
+ harvested.should == Mash.new({
+ :conf => ClusterChef::DirectoryAspect.new(component, "conf", :conf, ["/etc/flume/conf"]),
+ :data => ClusterChef::DirectoryAspect.new(component, "data", :data, ["/data/db/flume"]),
+ :home => ClusterChef::DirectoryAspect.new(component, "home", :home, ["/usr/lib/flume"]),
+ :log => ClusterChef::DirectoryAspect.new(component, "log", :log, ["/var/log/flume"]),
+ :pid => ClusterChef::DirectoryAspect.new(component, "pid", :pid, ["/var/run/flume"]),
+ })
+ end
+ it 'harvests non-standard dirs' do
+ chef_node[:flume][:foo_dirs] = ['/var/foo/flume', '/var/bar/flume']
+ directory_aspects = harvest_klass(flume_node_component)
+ directory_aspects.should == Mash.new({
+ :conf => ClusterChef::DirectoryAspect.new(component, "conf", :conf, ["/etc/flume/conf"]),
+ :data => ClusterChef::DirectoryAspect.new(component, "data", :data, ["/data/db/flume"]),
+ :foo => ClusterChef::DirectoryAspect.new(component, "foo", :foo, ["/var/foo/flume", "/var/bar/flume"]),
+ :home => ClusterChef::DirectoryAspect.new(component, "home", :home, ["/usr/lib/flume"]),
+ :log => ClusterChef::DirectoryAspect.new(component, "log", :log, ["/var/log/flume"]),
+ :pid => ClusterChef::DirectoryAspect.new(component, "pid", :pid, ["/var/run/flume"]),
+ })
+ end
+ it 'harvests plural directory sets ending with "_dirs"' do
+ component = hadoop_namenode_component
+ directory_aspects = harvest_klass(component)
+ directory_aspects.should == Mash.new({
+ :conf => ClusterChef::DirectoryAspect.new(component, "conf", :conf, ["/etc/hadoop/conf"]),
+ :data => ClusterChef::DirectoryAspect.new(component, "data", :data, ["/mnt1/hadoop/hdfs/name", "/mnt2/hadoop/hdfs/name"]),
+ :home => ClusterChef::DirectoryAspect.new(component, "home", :home, ["/usr/lib/hadoop"]),
+ :log => ClusterChef::DirectoryAspect.new(component, "log", :log, ["/hadoop/log"]),
+ :pid => ClusterChef::DirectoryAspect.new(component, "pid", :pid, ["/var/run/hadoop"]),
+ :tmp => ClusterChef::DirectoryAspect.new(component, "tmp", :tmp, ["/hadoop/tmp"]),
+ })
+ end
+
+ # it 'finds its associated resource'
+ # context 'permissions' do
+ # it 'finds its mode / owner / group from the associated respo'
+ # end
+ #
+ # context '#flavor' do
+ # def good_flavors() [:home, :conf, :log, :tmp, :pid, :data, :lib, :journal, :cache] ; end
+ # it "accepts #{good_flavors}"
+ # end
+ # context '#limits' do
+ # it 'accepts an arbitrary hash'
+ # end
+ end
+
+ describe ClusterChef::ExportedAspect do
+ # context '#files' do
+ # let(:component){ hbase_master_component }
+ # it 'harvests attributes beginning with "exported_"' do
+ # harvested.should == Mash.new({
+ # :confs => ClusterChef::ExportedAspect.new(component, "confs", :confs, ["/etc/hbase/conf/hbase-default.xml", "/etc/hbase/conf/hbase-site.xml"]),
+ # :jars => ClusterChef::ExportedAspect.new(component, "jars", :jars, ["/usr/lib/hbase/hbase-0.90.1-cdh3u0.jar", "/usr/lib/hbase/hbase-0.90.1-cdh3u0-tests.jar"])
+ # })
+ # end
+ # end
+
+ it 'converts flavor to sym' do
+ subject.flavor('hi').should == :hi
+ subject.flavor.should == :hi
+ end
+ end
+
+ # describe ClusterChef::CookbookAspect do
+ # end
+ #
+ # describe ClusterChef::CronAspect do
+ # end
+
+end
58 spec/attr_struct_spec.rb
@@ -0,0 +1,58 @@
+require File.expand_path(File.dirname(__FILE__) + '/spec_helper')
+require CLUSTER_CHEF_DIR("libraries/metachef.rb")
+
+describe ClusterChef::AttrStruct do
+ let(:car_class) do
+ Class.new do
+ include ClusterChef::AttrStruct
+ dsl_attr :name
+ dsl_attr :model
+ dsl_attr :doors, :kind_of => Integer
+ dsl_attr :engine
+ end
+ end
+ let(:engine_class) do
+ Class.new do
+ include ClusterChef::AttrStruct
+ dsl_attr :name
+ dsl_attr :displacement
+ dsl_attr :cylinders, :kind_of => Integer
+ end
+ end
+
+ let(:chevy_350){ engine_class.new('chevy', 350, 8) }
+ let(:hot_rod){ car_class.new('39 ford', 'tudor', 2, chevy_350) }
+
+ context '#to_hash' do
+ it('succeeds'){ chevy_350.to_hash.should == { 'name' => 'chevy', 'displacement' => 350, 'cylinders' => 8} }
+ it('nests'){ hot_rod.to_hash.should == { "name" => "39 ford", "model" => "tudor", "doors" => 2, "engine"=> { 'name' => 'chevy', 'displacement' => 350, 'cylinders' => 8} } }
+ it('is a Hash'){ hot_rod.to_hash.class.should == Hash }
+ end
+
+ context '#to_mash' do
+ it('succeeds') do
+ msh = chevy_350.to_mash
+ msh.should == Mash.new({ 'name' => 'chevy', 'displacement' => 350, 'cylinders' => 8})
+ msh['name'].should == 'chevy'
+ msh[:name ].should == 'chevy'
+ end
+ it('nests'){ hot_rod.to_mash.should == Mash.new({ "name" => "39 ford", "model" => "tudor", "doors" => 2, "engine"=> { 'name' => 'chevy', 'displacement' => 350, 'cylinders' => 8} }) }
+ it('is a Mash'){ hot_rod.to_mash.class.should == Mash }
+ end
+
+ context '#dsl_attr' do
+ it 'adds a set-or-return accessor' do
+ chevy_350.cylinders(6).should == 6
+ chevy_350.cylinders.should == 6
+ end
+
+ it 'adds the key to .keys' do
+ car_class.keys.should == [:name, :model, :doors, :engine]
+ end
+ it 'does not duplicate or re-order keys' do
+ car_class.new.engine.should be_nil
+ car_class.dsl_attr(:engine, :default => 4)
+ car_class.new.engine.should == 4
+ end
+ end
+end
138 spec/component_spec.rb
@@ -0,0 +1,138 @@
+require File.expand_path(File.dirname(__FILE__) + '/spec_helper')
+require CLUSTER_CHEF_DIR("libraries/metachef.rb")
+require CLUSTER_CHEF_DIR("libraries/aspects")
+
+describe ClusterChef::Component do
+ include_context 'dummy_chef'
+
+ context 'registering aspects' do
+ let(:klass) do
+ klass = Class.new(ClusterChef::Component)
+ klass.has_aspect(ClusterChef::DaemonAspect)
+ klass
+ end
+ let(:component){ klass.new(chef_node, :chef, :client) }
+
+ it 'adds a set-or-return plural accessor' do
+ component.daemons.should == {}
+ component.daemons.should be_a(Mash)
+ component.daemons( Mash.new({ 'hi' => :there}) )
+ component.daemons.should == { 'hi' => :there }
+ component.daemons.should be_a(Mash)
+ lambda{ component.daemons(69) }.should raise_error(/be a kind of Mash/)
+ end
+
+ context 'makes a singular accessor' do
+ let(:megatron){ ClusterChef::DaemonAspect.new(component, :megatron, 'megatron', 'gun', :start) }
+
+ it 'that is set-or-return' do
+ component.daemon(:megatron).should be_nil
+ component.daemon(:megatron, megatron)
+ component.daemon(:megatron).should == megatron
+ component.daemons.should == { 'megatron' => megatron }
+ end
+
+ it 'that lets me manipulate the aspect in a block' do
+ component.daemon(:megatron, megatron)
+ component.daemon(:megatron).pattern.should == 'gun'
+ expected_self = megatron
+ component.daemon(:megatron) do
+ self.pattern('robot')
+ self.should == expected_self
+ end
+ component.daemon(:megatron).pattern.should == 'robot'
+ end
+
+ it 'that auto-vivifies the aspect for the block' do
+ expected_component = component
+ component.daemon(:grimlock) do
+ self.name.should == :grimlock
+ self.component.should == expected_component
+ self.pattern.should == nil
+ self.pattern 'dinosaur'
+ end
+ component.daemon(:grimlock).pattern.should == 'dinosaur'
+ component.daemons.keys.should == ['grimlock']
+ end
+ end
+
+ it 'sees all the registered aspects' do
+ klass.aspect_types.should == Mash.new({ :daemons => ClusterChef::DaemonAspect })
+ end
+ end
+
+ context '.harvest_aspects' do
+ before(:each) do
+ component.harvest_all(chef_context)
+ end
+
+ context 'works on a complex example' do
+ let(:component){ hadoop_datanode_component }
+
+ it('daemon') do
+ component.daemons.should == Mash.new({
+ :hadoop_datanode => ClusterChef::DaemonAspect.new(component, "hadoop_datanode", "hadoop_datanode", "hadoop_datanode", 'start')
+ })
+ end
+ it('port') do
+ component.ports.should == Mash.new({
+ :dash_port => ClusterChef::PortAspect.new(component, "dash_port", :dash, "50075"),
+ :ipc_port => ClusterChef::PortAspect.new(component, "ipc_port", :ipc, "50020"),
+ :jmx_dash_port => ClusterChef::PortAspect.new(component, "jmx_dash_port", :jmx_dash, "8006"),
+ :port => ClusterChef::PortAspect.new(component, "port", :port, "50010"),
+ })
+ end
+ it('dashboard') do
+ component.dashboards.should == Mash.new({
+ :dash => ClusterChef::DashboardAspect.new(component, "dash", :http_dash, "http://33.33.33.12:50075/"),
+ :jmx_dash => ClusterChef::DashboardAspect.new(component, "jmx_dash", :jmx_dash, "http://33.33.33.12:8006/"),
+ })
+ end
+ it('log') do
+ component.logs.should == Mash.new({
+ :log => ClusterChef::LogAspect.new(component, "log", :log, ["/hadoop/log"])
+ })
+ end
+ it('directory') do
+ component.directories.should == Mash.new({
+ :conf => ClusterChef::DirectoryAspect.new(component, "conf", :conf, ["/etc/hadoop/conf"]),
+ :data => ClusterChef::DirectoryAspect.new(component, "data", :data, ["/mnt1/hadoop/hdfs/data", "/mnt2/hadoop/hdfs/data"]),
+ :home => ClusterChef::DirectoryAspect.new(component, "home", :home, ["/usr/lib/hadoop"]),
+ :log => ClusterChef::DirectoryAspect.new(component, "log", :log, ["/hadoop/log"]),
+ :pid => ClusterChef::DirectoryAspect.new(component, "pid", :pid, ["/var/run/hadoop"]),
+ :tmp => ClusterChef::DirectoryAspect.new(component, "tmp", :tmp, ["/hadoop/tmp"]),
+ })
+ end
+ it('exported') do
+ component.exporteds.should == Mash.new({
+ :confs => ClusterChef::ExportedAspect.new(component, "confs", :confs, ["/etc/hadoop/conf/core-site.xml", "/etc/hadoop/conf/hdfs-site.xml", "/etc/hadoop/conf/mapred-site.xml" ]),
+ :jars => ClusterChef::ExportedAspect.new(component, "jars", :jars, ["/usr/lib/hadoop/hadoop-core.jar","/usr/lib/hadoop/hadoop-examples.jar", "/usr/lib/hadoop/hadoop-test.jar", "/usr/lib/hadoop/hadoop-tools.jar" ]),
+ })
+ end
+ end
+
+ end
+
+ context '#node_info' do
+ it 'returns a mash' do
+ chef_server_component.node_info.should be_a(Mash)
+ end
+ it 'extracts the node attribute tree' do
+ chef_server_component.node_info.should == Mash.new({ :user => 'chef', :port => 4000, :server => { :port => 4000 }, :webui => { :port => 4040, :user => 'www-data' } })
+ end
+ it 'overrides system attrs with subsystem attrs' do
+ chef_webui_component.node_info.should == Mash.new({ :user => 'www-data', :port => 4040, :server => { :port => 4000 }, :webui => { :port => 4040, :user => 'www-data' } })
+ end
+ it 'warns but does not fail if system is missing' do
+ Chef::Log.should_receive(:warn).with("no system data in component 'mxyzptlk_shazbot', node 'node[el_ridiculoso-aqui-0]'")
+ comp = ClusterChef::Component.new(dummy_node, :mxyzptlk, :shazbot)
+ comp.node_info.should == Mash.new
+ end
+ it 'warns but does not fail if subsystem is missing' do
+ Chef::Log.should_receive(:warn).with("no subsystem data in component 'chef_zod', node 'node[el_ridiculoso-aqui-0]'")
+ comp = ClusterChef::Component.new(dummy_node, :chef, :zod)
+ comp.node_info.should == Mash.new({ :user => 'chef', :server => { :port => 4000 }, :webui => { :port => 4040, :user => 'www-data' } })
+ end
+ end
+
+end
116 spec/discovery_spec.rb
@@ -0,0 +1,116 @@
+require File.expand_path(File.dirname(__FILE__) + '/spec_helper')
+require CLUSTER_CHEF_DIR("libraries/metachef")
+require CLUSTER_CHEF_DIR("libraries/aspects")
+
+describe ClusterChef::Discovery do
+ include_context 'dummy_chef'
+
+ context '.announce' do
+
+ context 'populates the node[:announces] tree' do
+ before(:each) do
+ ClusterChef::NodeUtils.stub!(:timestamp){ '20090102030405' }
+ end
+ subject{ recipe.node[:announces]['el_ridiculoso-chef-server'] }
+
+ it 'with the component name' do
+ recipe.announce(:chef, :server)
+ recipe.node[:announces].should include('el_ridiculoso-chef-server')
+ end
+ it 'sets a timestamp' do
+ ClusterChef::NodeUtils.should_receive(:timestamp).and_return('20010101223344')
+ recipe.announce(:chef, :server)
+ subject[:timestamp].should == '20010101223344'
+ end
+ it 'defaults the realm to the cluster name' do
+ recipe.node[:cluster_name] = 'grimlock'
+ recipe.announce(:chef, :server)
+ comp = recipe.node[:announces]['grimlock-chef-server']
+ comp[:realm].should == :grimlock
+ end
+ it 'lets me set the realm' do
+ recipe.announce(:chef, :server, :realm => :bumblebee)
+ comp = recipe.node[:announces]['bumblebee-chef-server']
+ comp[:realm].should == :bumblebee
+ end
+ it 'stuffs the component in as a hash' do
+ recipe.announce(:chef, :server)
+ subject.to_hash.should == {
+ 'sys' => :chef, 'subsys' => :server,
+ 'name' => 'chef_server', 'realm' => :el_ridiculoso, 'timestamp' => "20090102030405",
+ 'daemons' => {}, 'ports' => {}, 'dashboards' => {}, 'logs' => {}, 'directories' => {}, 'exporteds' => {},
+ }
+ end
+ it 'lets the node know it changed' do
+ recipe.should_receive(:node_changed!)
+ recipe.announce(:chef, :server)
+ end
+ end
+
+ it 'returns the announced component' do
+ component = recipe.announce(:chef, :server)
+ component.should be_a(ClusterChef::Component)
+ component.fullname.should == 'el_ridiculoso-chef-server'
+ end
+
+ context 'lets me play around in the component' do
+ it 'instance_evals a block' do
+ comp = recipe.announce(:chef, :server) do
+ log(:log){ dirs ['better/than/bad/its/good'] }
+ end
+ # comp.log(:log).dirs.should == ['better/than/bad/its/good']
+ # recipe.node[:announces]['el_ridiculoso-chef-server'].should == {}
+ end
+ end
+ end
+
+ context '.discover_all_nodes' do
+ before(:each) do
+ dummy_recipe.stub!(:search).
+ with(:node, 'announces:el_ridiculoso-hadoop-datanode').
+ and_return( all_nodes.values_at('el_ridiculoso-aqui-0', 'el_ridiculoso-pequeno-0') )
+ dummy_recipe.stub!(:search).
+ with(:node, 'announces:el_ridiculoso-hadoop-tasktracker').
+ and_return( all_nodes.values_at('el_ridiculoso-aqui-0', 'el_ridiculoso-pequeno-0') )
+ dummy_recipe.stub!(:search).
+ with(:node, 'announces:el_ridiculoso-redis-server').
+ and_return( all_nodes.values_at('el_ridiculoso-aqui-0') )
+ dummy_recipe.stub!(:search).
+ with(:node, 'announces:cocina-chef-client').
+ and_return( all_nodes.values )
+ end
+ it 'finds nodes matching the request, sorted by timestamp' do
+ result = dummy_recipe.discover_all_nodes("el_ridiculoso-hadoop-datanode")
+ result.map{|nd| nd.name }.should == ['el_ridiculoso-pequeno-0', 'el_ridiculoso-aqui-0']
+ end
+
+ it 'replaces itself with a current copy in the search results' do
+ result = dummy_recipe.discover_all_nodes("el_ridiculoso-hadoop-datanode")
+ result.map{|nd| nd.name }.should == ['el_ridiculoso-pequeno-0', 'el_ridiculoso-aqui-0']
+ result[1].should have_key(:nfs)
+ end
+ it 'finds current node if it has announced (even when the server\'s copy has not)' do
+ result = dummy_recipe.discover_all_nodes("el_ridiculoso-redis-server")
+ result.map{|nd| nd.name }.should == ['el_ridiculoso-aqui-0']
+ result[0].should have_key(:nfs)
+ end
+ it 'does not find current node if it has not announced (even when the server\'s copy has announced)' do
+ result = dummy_recipe.discover_all_nodes("el_ridiculoso-hadoop-tasktracker")
+ result.map{|nd| nd.name }.should == ['el_ridiculoso-pequeno-0']
+ end
+    it 'when no node is found, warns and returns an empty array' do
+ dummy_recipe.should_receive(:search).
+ with(:node, 'announces:el_ridiculoso-hadoop-mxyzptlk').and_return([])
+ Chef::Log.should_receive(:warn).with("No node announced for 'el_ridiculoso-hadoop-mxyzptlk'")
+ result = dummy_recipe.discover_all_nodes("el_ridiculoso-hadoop-mxyzptlk")
+ result.should == []
+ end
+ end
+
+ it 'loads the node from its fixture' do
+ node_json.keys.sort.should == ["apt", "apt_cacher", "aws", "block_device", "chef_environment", "chef_packages", "chef_server", "chef_type", "cloud", "cluster_chef", "cluster_name", "cluster_size", "command", "cpu", "current_user", "discovery", "dmi", "domain", "end", "etc", "facet_index", "facet_name", "filesystem", "firewall", "flume", "fqdn", "ganglia", "groups", "hadoop", "hbase", "hostname", "install_from", "ipaddress", "java", "jruby", "kernel", "languages", "lsb", "macaddress", "memory", "mountable_volumes", "name", "network", "nfs", "node_name", "nodejs", "ntp", "os", "os_version", "pig", "pkg_sets", "platform", "platform_version", "python", "recipes", "redis", "resque", "rstats", "run_list", "runit", "server_tuning", "tags", "thrift", "users", "value_for_platform", "virtualbox", "virtualization", "zookeeper"]
+ chef_node.name.should == 'el_ridiculoso-aqui-0'
+ chef_node[:cloud][:public_ipv4].should == "10.0.2.15"
+ end
+
+end
2,171 spec/fixtures/chef_node-el_ridiculoso-aqui-0.json
@@ -0,0 +1,2171 @@
+{
+ "chef_type": "node",
+ "name": "el_ridiculoso-aqui-0",
+ "chef_environment": "_default",
+ "languages": {
+ "ruby": {
+ "platform": "x86_64-linux",
+ "version": "1.9.2",
+ "release_date": "2011-07-09",
+ "target": "x86_64-unknown-linux-gnu",
+ "target_cpu": "x86_64",
+ "target_vendor": "unknown",
+ "target_os": "linux",
+ "host": "x86_64-unknown-linux-gnu",
+ "host_cpu": "x86_64",
+ "host_os": "linux-gnu",
+ "host_vendor": "unknown",
+ "bin_dir": "/usr/bin",
+ "ruby_bin": "/usr/bin/ruby1.9.2-p290",
+ "gems_dir": "/usr/lib/ruby/gems/1.9.2-p290",
+ "gem_bin": "/usr/bin/gem1.9.2-p290"
+ },
+ "python": {
+ "version": "2.7.1+",
+ "builddate": "Apr 11 2011, 18:13:53"
+ },
+ "php": {
+ "version": "5.3.5-1ubuntu7.3",
+ "builddate": "(cli) (built: Oct"
+ },
+ "java": {
+ "version": "1.6.0_26",
+ "runtime": {
+ "name": "Java(TM) SE Runtime Environment",
+ "build": "1.6.0_26-b03"
+ },
+ "hotspot": {
+ "name": "Java HotSpot(TM) 64-Bit Server VM",
+ "build": "20.1-b02, mixed mode"
+ }
+ },
+ "perl": {
+ "version": "5.10.1",
+ "archname": "x86_64-linux-gnu-thread-multi"
+ }
+ },
+ "kernel": {
+ "name": "Linux",
+ "release": "2.6.38-8-server",
+ "version": "#42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011",
+ "machine": "x86_64",
+ "modules": {
+ "nfs": {
+ "size": "330371"
+ },
+ "lockd": {
+ "size": "85732"
+ },
+ "fscache": {
+ "size": "57123"
+ },
+ "nfs_acl": {
+ "size": "12883"
+ },
+ "auth_rpcgss": {
+ "size": "52881"
+ },
+ "sunrpc": {
+ "size": "234297"
+ },
+ "vboxsf": {
+ "size": "39343"
+ },
+ "vesafb": {
+ "size": "13761"
+ },
+ "ppdev": {
+ "size": "17113"
+ },
+ "psmouse": {
+ "size": "73535"
+ },
+ "serio_raw": {
+ "size": "13166"
+ },
+ "parport_pc": {
+ "size": "36959"
+ },
+ "vboxguest": {
+ "size": "232904"
+ },
+ "i2c_piix4": {
+ "size": "13303"
+ },
+ "lp": {
+ "size": "17789"
+ },
+ "parport": {
+ "size": "46458"
+ },
+ "ahci": {
+ "size": "25951"
+ },
+ "libahci": {
+ "size": "26642"
+ },
+ "e1000": {
+ "size": "111862"
+ }
+ },
+ "os": "GNU/Linux"
+ },
+ "os": "linux",
+ "os_version": "2.6.38-8-server",
+ "virtualization": {
+ "system": "vbox",
+ "role": "guest"
+ },
+ "hostname": "aqui",
+ "fqdn": "aqui",
+ "domain": null,
+ "network": {
+ "interfaces": {
+ "eth0": {
+ "type": "eth",
+ "number": "0",
+ "encapsulation": "Ethernet",
+ "addresses": {
+ "08:00:27:60:82:6e": {
+ "family": "lladdr"
+ },
+ "10.0.2.15": {
+ "family": "inet",
+ "broadcast": "10.0.2.255",
+ "netmask": "255.255.255.0"
+ },
+ "fe80::a00:27ff:fe60:826e": {
+ "family": "inet6",
+ "prefixlen": "64",
+ "scope": "Link"
+ }
+ },
+ "flags": [
+ "UP",
+ "BROADCAST",
+ "RUNNING",
+ "MULTICAST"
+ ],
+ "mtu": "1500",
+ "arp": {
+ "10.0.2.3": "52:54:00:12:35:03",
+ "10.0.2.2": "52:54:00:12:35:02"
+ }
+ },
+ "eth1": {
+ "type": "eth",
+ "number": "1",
+ "encapsulation": "Ethernet",
+ "addresses": {
+ "08:00:27:67:8b:d1": {
+ "family": "lladdr"
+ },
+ "33.33.33.12": {
+ "family": "inet",
+ "broadcast": "33.33.33.255",
+ "netmask": "255.255.255.0"
+ },
+ "fe80::a00:27ff:fe67:8bd1": {
+ "family": "inet6",
+ "prefixlen": "64",
+ "scope": "Link"
+ }
+ },
+ "flags": [
+ "UP",
+ "BROADCAST",
+ "RUNNING",
+ "MULTICAST"
+ ],
+ "mtu": "1500",
+ "arp": {
+ "33.33.33.11": "08:00:27:ff:d6:93",
+ "33.33.33.1": "0a:00:27:00:00:00"
+ }
+ },
+ "lo": {
+ "encapsulation": "Loopback",
+ "addresses": {
+ "127.0.0.1": {
+ "family": "inet",
+ "netmask": "255.0.0.0"
+ },
+ "::1": {
+ "family": "inet6",
+ "prefixlen": "128",
+ "scope": "Node"
+ }
+ },
+ "flags": [
+ "UP",
+ "LOOPBACK",
+ "RUNNING"
+ ],
+ "mtu": "16436"
+ }
+ },
+ "default_gateway": "10.0.2.2",
+ "default_interface": "eth0"
+ },
+ "ipaddress": "10.0.2.15",
+ "macaddress": "08:00:27:60:82:6e",
+ "virtualbox": {
+ "public_ips": [
+ "10.0.2.15"
+ ],
+ "private_ips": [
+ "33.33.33.12",
+ "10.0.2.15"
+ ],
+ "host_only_ips": [
+ "33.33.33.12"
+ ],
+ "local_ipv4": "33.33.33.12",
+ "public_ipv4": "10.0.2.15",
+ "public_hostname": "aqui",
+ "host_guest": {
+ "sysprep_exec": null,
+ "sysprep_args": null
+ },
+ "host_info": {
+ "gui": {
+ "language_id": "en_US"
+ },
+ "v_box_ver": "4.1.6",
+ "v_box_ver_ext": "4.1.6",
+ "v_box_rev": "74713"
+ },
+ "guest_info": {
+ "os": {
+ "product": "Linux",
+ "release": "2.6.38-8-server",
+ "version": "#42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011",
+ "service_pack": null,
+ "logged_in_users_list": "flip",
+ "logged_in_users": "1",
+ "no_logged_in_users": "false"
+ },
+ "net": {
+ "0": {
+ "v4": {
+ "ip": "10.0.2.15",
+ "broadcast": "10.0.2.255",
+ "netmask": "255.255.255.0"
+ },
+ "mac": "08002760826E",
+ "status": "Up"
+ },
+ "1": {
+ "v4": {
+ "ip": "33.33.33.12",
+ "broadcast": "33.33.33.255",
+ "netmask": "255.255.255.0"
+ },
+ "mac": "080027678BD1",
+ "status": "Up"
+ },
+ "count": "2"
+ }
+ },
+ "guest_add": {
+ "version": "4.1.6",
+ "version_ext": "4.1.6",
+ "revision": "74713"
+ }
+ },
+ "chef_packages": {
+ "ohai": {
+ "version": "0.6.11",
+ "ohai_root": "/usr/lib/ruby/gems/1.9.2-p290/gems/ohai-0.6.11/lib/ohai"
+ },
+ "chef": {
+ "version": "0.10.4",
+ "chef_root": "/usr/lib/ruby/gems/1.9.2-p290/gems/chef-0.10.4/lib"
+ }
+ },
+ "etc": {
+ "passwd": {
+ "root": {
+ "dir": "/root",
+ "gid": 0,
+ "uid": 0,
+ "shell": "/bin/bash",
+ "gecos": "root"
+ },
+ "daemon": {
+ "dir": "/usr/sbin",
+ "gid": 1,
+ "uid": 1,
+ "shell": "/bin/sh",
+ "gecos": "daemon"
+ },
+ "bin": {
+ "dir": "/bin",
+ "gid": 2,
+ "uid": 2,
+ "shell": "/bin/sh",
+ "gecos": "bin"
+ },
+ "sys": {
+ "dir": "/dev",
+ "gid": 3,
+ "uid": 3,
+ "shell": "/bin/sh",
+ "gecos": "sys"
+ },
+ "sync": {
+ "dir": "/bin",
+ "gid": 65534,
+ "uid": 4,
+ "shell": "/bin/sync",
+ "gecos": "sync"
+ },
+ "games": {
+ "dir": "/usr/games",
+ "gid": 60,
+ "uid": 5,
+ "shell": "/bin/sh",
+ "gecos": "games"
+ },
+ "man": {
+ "dir": "/var/cache/man",
+ "gid": 12,
+ "uid": 6,
+ "shell": "/bin/sh",
+ "gecos": "man"
+ },
+ "lp": {
+ "dir": "/var/spool/lpd",
+ "gid": 7,
+ "uid": 7,
+ "shell": "/bin/sh",
+ "gecos": "lp"
+ },
+ "mail": {
+ "dir": "/var/mail",
+ "gid": 8,
+ "uid": 8,
+ "shell": "/bin/sh",
+ "gecos": "mail"
+ },
+ "news": {
+ "dir": "/var/spool/news",
+ "gid": 9,
+ "uid": 9,
+ "shell": "/bin/sh",
+ "gecos": "news"
+ },
+ "uucp": {
+ "dir": "/var/spool/uucp",
+ "gid": 10,
+ "uid": 10,
+ "shell": "/bin/sh",
+ "gecos": "uucp"
+ },
+ "proxy": {
+ "dir": "/bin",
+ "gid": 13,
+ "uid": 13,
+ "shell": "/bin/sh",
+ "gecos": "proxy"
+ },
+ "www-data": {
+ "dir": "/var/www",
+ "gid": 33,
+ "uid": 33,
+ "shell": "/bin/sh",
+ "gecos": "www-data"
+ },
+ "backup": {
+ "dir": "/var/backups",
+ "gid": 34,
+ "uid": 34,
+ "shell": "/bin/sh",
+ "gecos": "backup"
+ },
+ "list": {
+ "dir": "/var/list",
+ "gid": 38,
+ "uid": 38,
+ "shell": "/bin/sh",
+ "gecos": "Mailing List Manager"
+ },
+ "irc": {
+ "dir": "/var/run/ircd",
+ "gid": 39,
+ "uid": 39,
+ "shell": "/bin/sh",
+ "gecos": "ircd"
+ },
+ "gnats": {
+ "dir": "/var/lib/gnats",
+ "gid": 41,
+ "uid": 41,
+ "shell": "/bin/sh",
+ "gecos": "Gnats Bug-Reporting System (admin)"
+ },
+ "nobody": {
+ "dir": "/nonexistent",
+ "gid": 65534,
+ "uid": 65534,
+ "shell": "/bin/sh",
+ "gecos": "nobody"
+ },
+ "libuuid": {
+ "dir": "/var/lib/libuuid",
+ "gid": 101,
+ "uid": 100,
+ "shell": "/bin/sh",
+ "gecos": ""
+ },
+ "syslog": {
+ "dir": "/home/syslog",
+ "gid": 103,
+ "uid": 101,
+ "shell": "/bin/false",
+ "gecos": ""
+ },
+ "ntp": {
+ "dir": "/home/ntp",
+ "gid": 107,
+ "uid": 102,
+ "shell": "/bin/false",
+ "gecos": ""
+ },
+ "sshd": {
+ "dir": "/var/run/sshd",
+ "gid": 65534,
+ "uid": 103,
+ "shell": "/usr/sbin/nologin",
+ "gecos": ""
+ },
+ "vagrant": {