import buildit 1.1 into git

commit 00118a1e84e7c05f66965081519f3f4351849b36 0 parents
@mcdonc authored
40 CHANGES.txt
@@ -0,0 +1,40 @@
+1.1 (10/16/2007)
+
+ Namespace declarations in the root configuration file can now leave
+ out the section declaration. In this case the task configuration must
+ provide a section named [default_namespace], which will be selected.
+
+ Fixed an error in the SkelCopier class which allowed recursion into
+ directories that were supposed to be omitted.
+
+ Fixed an error in the SkelCopier class which made it impossible to
+ copy directory trees deeper than one level.
+
+ Added a "destructive" option to the SkelCopier command. The SkelCopier
+ would normally skip any file where the destination already existed.
+ The "destructive" flag will overwrite existing files.
+
+ Removed documentation for "Fetch", it was incorrect. The Fetch
+ command is too magical to document and should likely be removed
+ from commandlib.
+
+ Fixed incorrect documentation: docs said there should be a [defaults]
+ section in the root.ini file, it should have said there should be
+ a [globals] section in the root.ini file.
+
+ SkelCopier in commandlib did not work properly: it built incorrect
+ target file paths.
+
+1.0 (04/15/2007)
+
+ Allow Fetch task to unpack zip files.
+
+ New builtin global replacement value: 'platform', which is the value
+ of distutils.util.get_platform() if distutils is installed.
+
+ Fixed a bug in the guts of the postorder machinery to not generate
+ too many duplicate tasks.
+
+0.1 (02/04/2007)
+
+ Initial release
24 LICENSE.txt
@@ -0,0 +1,24 @@
+* Copyright (c) 2007, Agendaless Consulting
+* All rights reserved.
+*
+* Redistribution and use in source and binary forms, with or without
+* modification, are permitted provided that the following conditions are met:
+* * Redistributions of source code must retain the above copyright
+* notice, this list of conditions and the following disclaimer.
+* * Redistributions in binary form must reproduce the above copyright
+* notice, this list of conditions and the following disclaimer in the
+* documentation and/or other materials provided with the distribution.
+* * Neither the name of the <organization> nor the
+* names of its contributors may be used to endorse or promote products
+* derived from this software without specific prior written permission.
+*
+* THIS SOFTWARE IS PROVIDED BY Agendaless Consulting ``AS IS'' AND ANY
+* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+* DISCLAIMED. IN NO EVENT SHALL Agendaless Consulting BE LIABLE FOR ANY
+* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
723 README.txt
@@ -0,0 +1,723 @@
+Buildit
+
+ Buildit makes it easier to create a repeatable deployment of
+ software in a particular configuration. With it, you can perform
+ conditional compilation of source code, install software, run
+ scripts, or perform any repeatable sequence of tasks that ends up
+ creating a known set of files on your filesystem. On subsequent runs
+ of the same set of tasks, Buildit performs the least amount of work
+ possible to create the same set of files, only performing the work
+ that it detects has not already been performed by earlier runs.
+
+Change History
+
+ 0.1 -- Initial release
+
+ 1.0 -- 4/15/2007, second release, see CHANGES.txt for info
+
+ 1.1 -- (unreleased)
+
+Platforms
+
+ Buildit runs under any platform on which Python supports the
+ os.system command. This includes UNIX/Linux and Windows. It should
+ be run via Python 2.4+.
+
+Installation
+
+ Install Buildit by running the accompanying "setup.py" file, a la
+ "python setup.py install".
+
+License
+
+ The license under which Buildit is released is a BSD-like license.
+ It can be found accompanying the distribution in the LICENSE.txt
+ file.
+
+Rationale
+
+ Buildit was created to allow me to write "buildout" profiles which
+ need to perform arbitrary tasks in the service of setting up a
+ client operating system environment with software and data specific
+ to running applications I helped create. For instance, in one case,
+ it is currently used to build multiple "instances" of Zope, ZEO,
+ Apache/PHP, Squid, Python, and MySQL on a single machine, forming a
+ software "profile" particular to that machine. The same buildout
+ files are rearranged to create different instances of software on
+ different machines at the same site. The same software is used to
+ perform incremental upgrades to existing buildouts.
+
+ Why Not Make?
+
+ We had previously been using GNU Make for the same task, but my
+ clients couldn't maintain the makefiles easily after the
+ engagement ended because they were not willing to learn GNU Make's
+ automatic variables and pattern rules which I used somewhat
+ gratuitously to make my life easier. I also realized that even I
+ could barely read the makefiles after I had been away from them
+ for some time.
+
+ Although make's fundamental behavior is very simple, it has a few
+ problems. Because its basic feature set is so simple, and because
+ it is often pressed into doing some reasonably complex things,
+ many make versions have accreted features over the years to cope
+ with this complexity. Unfortunately, make was never meant to be a
+ full-blown programming language, and the additions made to its
+ syntax to support complex tasks are fairly mystifying.
+ Additionally, if advanced make features are used within a
+ Makefile, it is difficult to create, debug and maintain makefiles
+ for those with little "make-fu". Even simple failing makefiles
+ can be difficult to debug.
+
+ Why Not Ant?
+
+ I am a big fan of neither Java nor hand-writing XML. Ant requires I
+ use one and do the other.
+
+ Why Not SCons?
+
+ SCons is all about solving problems specific to the domain of
+ source code recompilation. Buildit is much smaller, and more
+ general.
+
+ Why Not A-A-P?
+
+ A-A-P was designed from the perspective of someone wanting to
+ develop and install a software package on many different
+ platforms. It does this very well, but loses some generality in
+ the process. A-A-P also uses a make-like syntax for describing
+ tasks, whereas Buildit uses Python.
+
+ Why not zc.buildout?
+
+ zc.buildout was released after Buildit was already mature.
+ Additionally, zc.buildout appears to have a focus on Python eggs
+ which Buildit does not.
+
+General Comparisons to Other Dependency Systems
+
+ Buildit is, for better or worse, completely general and very simple.
+ It performs OS shell tasks and calls in to arbitrary Python as
+ necessary only as specified by the recipe-writer, rather than
+ relying on any domain-specific implicit rules.
+
+ Buildit includes no built-in provisions for building C/Java/C++/etc
+ source to object code via implicit or user-defined pattern rules.
+ In fact, it knows nothing whatsoever about creating software from
+ source files into binaries.
+
+ Unlike makefiles, Buildit "recipe" files have no intrinsic syntax.
+ There are no tabs-vs-spaces issues, default rules, automatic
+ variables, or any special kind of syntax at all. In Buildit, recipe
+ files are defined within Python. If conditionals, looping,
+ environment integration, or other advanced features become necessary
+ within one of your recipes, rather than needing to spell these
+ things within a special syntax, you just use Python instead.
+
+ Unlike make, Buildit does not have the capability to perform
+ parallel execution of tasks (although it will not prevent it from
+ happening when it calls into make itself).
+
+Tasks
+
+ A Buildit "task" is equivalent to a "rule" in GNUmake. It is the
+ fundamental "unit of work" within buildit. A task has a name, a set
+ of targets, a working directory, a set of commands, and a set of
+ dependent tasks. These are described here.
+
+ name -- A task's name is a freeform bit of text describing the
+ purpose of the task. E.g. "configure python". Only one name may be
+ provided for a task. A name is required.
+
+ namespaces -- A task's namespaces value is a list or a string
+ representing the namespace(s) to which this task belongs. "Local"
+ references interpreted when executing the task will use replacement
+ values from each namespace.
+
+ targets -- The files that are created as a result of a successful
+ run of this task. A task's targets are strings specifying the files
+ that will be created as a result of this task. They may include
+ buildit interpolation syntax (e.g. '${pkgdir}'), which will be
+ resolved against the namespace set just while the task is performed.
+ Relative target paths are considered relative to the workdir.
+
+ workdir -- A task's workdir specifies the directory to which the OS
+ will chdir before performing the commands implied by the task. Only
+ one workdir may be specified. A workdir is optional, it needn't be
+ specified in the task argument list.
+
+ commands -- A task's command set is a list or tuple specifying the
+ commands that do the work implied by the task, which, as a general
+ rule, should involve creating the target file. The command set is
+ typically a sequence of strings, although in addition to strings,
+ special Python callable objects may be specified as a command. The
+ strings that make up commands are resolved against the replacement
+ dictionary for string interpolation. If only one string command is
+ specified, it may be specified without embedding it in a list or a
+ tuple (the same does not hold true for a single callable Python
+ object used as a command, it must be embedded in a list or tuple).
+
+ dependencies -- A task's dependency set is a sequence of other Task
+ instances upon which this task depends. This is the way a
+ dependency graph of tasks is formed. If only one dependency is
+ specified, it may be specified without embedding it in a sequence.
+
+Task Example
+
+ Here is an example task, which implies the work required to run
+ 'configure' within a Python source tree::
+
+ configure = Task(
+     'configure python',
+     namespaces = 'python',
+     targets = '${sharedir}/build/${pkgdn}/Makefile',
+     workdir = '${sharedir}',
+     # we build Python using --enable-shared in order to allow plpython
+     # to build against us on non-32-bit systems. It appears that this
+     # isn't necessary on 32-bit systems (neither Linux nor Mac), but
+     # required at least for x86_64 Linux.
+     commands = [
+         "mkdir -p build/${pkgdn}",
+         "cd build/${pkgdn} && ${sharedir}/src/${pkgdn}/configure \
+             --prefix=${sharedir}/opt/${pkgdn} --enable-shared",
+     ],
+     dependencies = (unpack, gcc4_patch_readline_c, py243_socket_patch)
+ )
+
+ The Description
+
+ The description of a task is just a string label. It is printed
+ when Buildit is run to help you track down problems and give users
+ a sense of what is happening when your recipes are run. It is
+ required. In the above example, the description is 'configure
+ python'.
+
+ The Namespaces
+
+ The namespaces of a task represent each namespace which it will
+ attempt to use to resolve local names (e.g. ${./local} names). In
+ the above example, the namespace is 'python'.
+
+ If a task is provided a namespaces argument which is a single
+ string with no spaces in it, it will be considered to have a
+ single namespace.
+
+ A task may have multiple namespaces. If a task has multiple
+ namespaces, it will be executed once for each namespace in the
+ list provided. For convenience, if a string with spaces in it is
+ provided as the 'namespaces' attribute, it is parsed into a list
+ of namespace names (this is mostly to work around the inability to
+ define lists easily in ConfigParser format).
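The string-vs-list handling described above can be sketched in a few lines of Python (the helper name is illustrative, not part of buildit's API):

```python
def normalize_namespaces(namespaces):
    # A plain string is split on whitespace, so 'python' yields one
    # namespace and 'python squid' yields two; lists and tuples pass
    # through unchanged.
    if isinstance(namespaces, str):
        return namespaces.split()
    return list(namespaces)

print(normalize_namespaces('python'))
print(normalize_namespaces('python squid'))
```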
+
+ The Targets
+
+ The targets of a task are the files that are meant to be created
+ by the commands specified within the task. Although the commands
+ of a task may create many files and perform otherwise arbitrary
+ actions, the target files are the files that must be created for
+ Buildit itself to consider the task "complete". It may (and almost
+ certainly will) require replacement interpolation. If only one
+ target file is required, it can be specified as a string. If more
+ than one target file is necessary, they must be supplied as
+ strings within a Python sequence. We only have one target above.
+ The target of our example above is
+ '${sharedir}/build/${pkgdn}/Makefile'.
+
+ A target file is not considered to be specified relative to the
+ working directory: it must be an absolute path or must be
+ specified relative to the current working directory from which the
+ Buildit driver is invoked. However, it can contain interpolation
+ syntax that will be resolved against the replacement object.
+
+ A target is optional. If a task has no targets, it will be run
+ unconditionally by Buildit on each invocation of the recipe in
+ which it is contained.
+
+ If all of a task's commands are run and the target files are not
+ subsequently available on the filesystem, Buildit will throw an
+ error.
+
+ Buildit automatically "touches" target files after they've been
+ created on the filesystem, so the date of all target files after a
+ Buildit run will be close to "now"; there's no need to "touch"
+ the target files manually.
+
+ The Working Directory
+
+ In the example task above, we specify a working directory
+ ('${sharedir}'). The working directory indicates the directory
+ to which we will tell the OS to chdir before performing the
+ commands indicated by the task. This is useful because it allows
+ us to specify relative paths in commands which follow. When the
+ task is finished, the working directory is unconditionally reset
+ to the working directory that was effective before the task
+ started. Task working directories take effect for only the
+ duration of the task. Using a workdir is optional. If a workdir
+ is not specified, the commands of the task will execute in the
+ context of the working directory of the shell used to invoke the
+ recipe file.
+
+ The Commands
+
+ In the example task shown above, we've specified two
+ commands. The first one is::
+
+ "mkdir -p build/${pkgdn}"
+
+ The second
+ is::
+
+ "cd build/${pkgdn} && ${sharedir}/src/${pkgdn}/configure \
+ --prefix=${sharedir}/opt/${pkgdn} --enable-shared"
+
+ Each command is a shell command. In this case, the shell commands
+ are UNIX shell commands.
+
+ The first command creates a build directory (in this case,
+ relative to the workdir directory '${sharedir}'). The second
+ changes the working directory to the newly-created build directory
+ and runs the 'configure' script in the Python source tree with the
+ "prefix" and "enable-shared" options. Note that each command is
+ interpolated against the namespace provided to the task. Thus if
+ 'pkgdn' was 'Python-2.4.3' and 'sharedir' was '/tmp', the command
+ would be expanded during execution like so::
+
+ mkdir -p build/Python-2.4.3
+
+ cd build/Python-2.4.3 && /tmp/src/Python-2.4.3/configure \
+ --prefix=/tmp/opt/Python-2.4.3 --enable-shared
+
+ Note that the commands are executed serially in the order
+ specified within the command set. Each command specified as a
+ string is executed by Python's 'os.system'. If any command fails,
+ (where "failure" is interpreted as a shell command exiting with a
+ nonzero exit code), an error will be raised.
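The serial-execution and failure semantics described above can be sketched as follows; this is an illustration of the behavior, not buildit's actual driver code:

```python
import os

def run_commands(commands, workdir=None):
    # Run each string command serially via os.system; a nonzero exit
    # status is treated as failure and aborts the run.  The working
    # directory is unconditionally restored afterwards.
    old_cwd = os.getcwd()
    try:
        if workdir is not None:
            os.chdir(workdir)
        for command in commands:
            status = os.system(command)
            if status != 0:
                raise RuntimeError('command failed: %s' % command)
    finally:
        os.chdir(old_cwd)

run_commands(['echo hello > /dev/null'])
```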
+
+ Commands that aren't strings are assumed to be Python callables.
+ This is not evident in the above example, but you may provide as a
+ command a Python callable with a particular interface (see the
+ commandlib module for examples). These kinds of commands are not
+ executed by Python's 'os.system'; instead the callable is expected
+ to do the work itself instead of delegating to the OS shell,
+ although it is free to do whatever it needs to do (e.g. the
+ callable may do its own delegation to the OS shell if necessary).
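The commandlib module's docstring spells out the interface such a command must provide: represent(task), execute(task), and check(task) methods. Here is a hypothetical command written against that interface (MakeDir is an example of my own, not a buildit class):

```python
import os

class MakeDir:
    # Hypothetical command object: does its work itself rather than
    # delegating to the OS shell.
    def __init__(self, path):
        self.path = path

    def represent(self, task):
        # human-readable representation of the command
        return 'mkdir -p %s' % self.path

    def check(self, task):
        # ensure the command could run properly, without running it
        return not os.path.isfile(self.path)

    def execute(self, task):
        # perform the action associated with the command
        if not os.path.isdir(self.path):
            os.makedirs(self.path)

cmd = MakeDir('/tmp/buildit-example')
print(cmd.represent(None))
```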
+
+ The Dependencies
+
+ The dependency set of a task (specified by 'dependencies' in a
+ Task constructor) identifies other Task instances upon which this
+ task is dependent. "Is dependent" in the previous sentence means
+ that the dependent task(s) must be completed before the task which
+ declares it as a dependency may be run. Buildit has a simple
+ algorithm for determining task and dependency "completeness"
+ specified within "Task Recompletion Algorithm" later in this
+ document.
+
+ The dependency set of the example above is '(unpack,
+ gcc4_patch_readline_c, py243_socket_patch)', which implies that
+ the tasks named 'unpack', 'gcc4_patch_readline_c', and
+ 'py243_socket_patch' must be completed before we can run the
+ 'configure' task. The dependent tasks are not shown in the
+ example, but for example, the 'unpack' task is presumably the task
+ that places the Python source files into
+ '${sharedir}/src/${pkgdn}'.
+
+ A task needn't specify any dependencies. A task may specify a
+ single dependency as a reference to a single Task instance, or it
+ may specify a sequence of references to Task instances by
+ embedding them in a list.
+
+Task Hints
+
+ Tasks should be written with the expectation that they will be run
+ more than once. For instance, if you create a symlink to a directory
+ within a command, the command should first check if a symlink
+ already exists at that location, or you'll quite possibly end up
+ symlinking the directory inside the existing symlinked directory on
+ subsequent runs.
+
+Namespaces
+
+ A Buildit namespace is a mapping of names to values. These mappings
+ are used within tasks to perform textual variable replacement (which
+ is also known as "interpolation").
+
+ Namespaces are user-defined. Multiple user-defined namespaces will
+ typically exist during a given Buildit execution. All values in a
+ given namespace are typically related to each other. For example, a
+ 'squidinstance' namespace might represent all of the names and
+ replacement values required to create an instance of the Squid proxy
+ server. A 'pound' namespace might represent all of the names and
+ replacement values required to create an installation of the Pound
+ load balancer.
+
+ A namespace is typically declared within one "section" of a
+ Windows-style ".INI" file. Names within a namespace must consist
+ solely of alphanumeric characters, the underscore, and the minus
+ sign. The value for a name can be any set of characters and may
+ also contain zero or more placeholders which mention other names
+ that should be interpolated. These interpolation targets are known
+ as "references", and they consist of a set of characters surrounded
+ by squiggly brackets prefixed with a dollar sign
+ (e.g. '${setofcharacters}'). Names and values are separated by any
+ number of whitespace characters on either side of an equal sign.
+
+ Here's an example of contents that might go into a namespace .INI
+ file::
+
+ [anamespace]
+ name1 = this is value one
+ name2 = ${./name1} is relative to this namespace
+ name3 = ${globalname}
+ name4 = ${external/name}
+ name5 = ${./name1} hello ${globalname}
+
+ If you are examining the above example, you might note that there
+ are four main types of strings which are allowed to compose a
+ value:
+
+ - String literals. In the above example, the string literal "this
+ is value one" is assigned to the 'name1' name.
+
+ - References to "global" names, which are names which must be found
+ in the "default" namespace. In the example above, the value of
+ name 'name3' has a reference to the global name 'globalname'.
+ Global names never have a slash character in them; they are
+ always simple names without any prefixes.
+
+ - References to "external" names, which are names that are found in
+ other namespaces. In the example above, 'name4' refers to one
+ external name, "${external/name}", which refers to the name
+ 'name' in the external namespace named 'external'. External
+ names always contain one slash, which separates the namespace
+ name from the name that is to be looked up. If the 'external'
+ namespace contained a name called 'name' with a value of 'foo',
+ the external reference in the example would resolve to "foo".
+
+ - References to "local" names, which are pointers to the values of
+ names which are found in the same namespace as the name being
+ defined. In the above example, the expansion of the value
+ "${./name1} is relative to this namespace" in the local name
+ 'name2' would become "this is value one is relative to this
+ namespace". A "local" name always starts with the prefix "./".
+ Essentially, local names are external names where the namespace
+ name is ".".
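The three reference forms can be illustrated with a small interpolation sketch; the function below mirrors the lookup rules just described and is illustrative only, not buildit's implementation:

```python
import re

def resolve(value, namespace, namespaces, global_ns):
    # Expand ${...} references: './name' is local to the current
    # namespace, 'ns/name' is external, and a bare name is global.
    def lookup(match):
        ref = match.group(1)
        if ref.startswith('./'):          # local reference
            found = namespaces[namespace][ref[2:]]
        elif '/' in ref:                  # external reference
            ns, name = ref.split('/', 1)
            found = namespaces[ns][name]
        else:                             # global reference
            found = global_ns[ref]
        # values may themselves contain references; resolve recursively
        return resolve(found, namespace, namespaces, global_ns)
    return re.sub(r'\$\{([^}]+)\}', lookup, value)

namespaces = {'anamespace': {'name1': 'this is value one'},
              'external': {'name': 'foo'}}
global_ns = {'globalname': 'bar'}
print(resolve('${./name1} hello ${globalname}', 'anamespace',
              namespaces, global_ns))
```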
+
+The Root .INI File
+
+ In order to begin a Buildit project, first create a file named
+ "root.ini" with the following content (in Windows-style .INI
+ format)::
+
+ [globals]
+ tgtdir = ${cwd}/sandbox
+
+ [namespaces]
+ foo = ${buildoutdir}/foo.ini [1.0]
+ bar = ${buildoutdir}/bar.ini [1.0]
+ baz = ${buildoutdir}/baz.ini
+
+ It really doesn't matter what you name this file, but for the sake
+ of reference let's say it's named "root.ini".
+
+ Note that the file consists of two sections: a 'globals' section,
+ and a 'namespaces' section.
+
+ The 'globals' section allows you to define names and values which
+ end up in the "global" namespace (see the 'Namespaces' section for a
+ definition).
+
+ The 'namespaces' section allows you to declare namespaces that will
+ be used during the execution of Buildit. One or more lines may be
+ defined within the namespaces section. Each line defines a
+ namespace, and is composed of the following:
+
+ - a name. In the above example, the namespaces 'foo', 'bar', and
+ 'baz' are declared.
+
+ - a filename and an optional section name. In the above example,
+ the section named "1.0" in the file named "${buildoutdir}/foo.ini"
+ is used for the foo namespace. A space must separate the filename
+ and the section name, and the section name must be surrounded by
+ brackets. If no explicit section name is provided the file must
+ contain a section named [default_namespace], which will be used.
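Parsing a namespace declaration line into its filename and section parts might look like this sketch (the helper is illustrative; it assumes the filename itself contains no spaces):

```python
def parse_namespace_line(value):
    # 'path [section]' -> (path, section); a bare path falls back to
    # the 'default_namespace' section.
    parts = value.split()
    if len(parts) == 2 and parts[1].startswith('['):
        return parts[0], parts[1].strip('[]')
    return parts[0], 'default_namespace'

print(parse_namespace_line('${buildoutdir}/foo.ini [1.0]'))
print(parse_namespace_line('${buildoutdir}/baz.ini'))
```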
+
+ In all values within the root .INI file (the default values or the
+ namespace values), you can use the following "built-in" global
+ replacement values:
+
+ ${cwd} -- the fully-qualified path to the initial working
+ directory of the process which invoked buildit.
+
+ ${buildoutdir} -- the fully-qualified path to the directory which
+ contains the Python file that first invokes Buildit.
+
+ ${username} -- the user name of the user who invoked the Python
+ file that first invokes Buildit.
+
+ ${platform} -- the value returned by the
+ distutils.util.get_platform() function.
+
+ These names are also available in the default namespace when
+ declaring values for other namespaces and running buildit tasks.
+
+The Namespace .INI Files
+
+ Each file referred to in the 'namespaces' section of the root
+ .INI file must exist on disk. Additionally, the section mentioned
+ on each namespace value line must be contained within the named
+ file.
+
+ If your root .INI file declares the following namespace section::
+
+ [namespaces]
+ breakfast = ${buildoutdir}/breakfast.ini [1.0]
+ lunch = ${buildoutdir}/lunch.ini
+ dinner = ${buildoutdir}/dinner.ini [1.0]
+
+ .. then three additional .ini files need to exist on your filesystem:
+ 'breakfast.ini', 'lunch.ini' and 'dinner.ini'. In general, these
+ should live relative to the directory containing the Python file
+ that initially invokes buildit (the "buildoutdir"). In this
+ case, 'breakfast.ini' and 'dinner.ini' both need to have a section
+ named '1.0' which contains one or more key/value pairs that make up
+ the namespace content. 'lunch.ini' must define a 'default_namespace'
+ section since a section of that name is selected when no explicit
+ section information is given.
+
+ Without explaining much about what it means, here's an example of
+ what might go in the "breakfast.ini" we've threatened to define
+ above within our root .ini file::
+
+ [1.0]
+ orderer = ${username}
+ coffeesize = large
+ coffeetype = espresso
+ coffeeorder = ${./coffeesize} ${./coffeetype}
+ bageltype = plain
+
+ The "lunch.ini" file may look like this::
+
+ [default_namespace]
+ orderer = ${username}
+ coffeesize = small
+ coffeetype = espresso
+ coffeeorder = ${./coffeesize} ${./coffeetype}
+ breadtype = rye bread
+
+ Here's an additional example of what might go into the "dinner.ini"
+ we've additionally threatened to define::
+
+ [1.0]
+ orderer = ${username}
+ coffeesize = small
+ coffeetype = ${breakfast/coffeetype}
+ coffeeorder = ${./coffeesize} ${./coffeetype}
+ breadtype = dinner roll
+
+ What's most interesting about what will "fall out" of these
+ definitions is the result of the variable expansion. Again, without
+ explaining why, assuming the current user name of the account
+ running the Buildit process is "chrism", here's what the
+ replacements would expand to in the 1.0 section of "breakfast.ini"::
+
+ orderer = chrism
+ coffeesize = large
+ coffeetype = espresso
+ coffeeorder = large espresso
+ bageltype = plain
+
+ The "lunch.ini" default_namespace section would expand to these
+ values::
+
+ orderer = chrism
+ coffeesize = small
+ coffeetype = espresso
+ coffeeorder = small espresso
+ breadtype = rye bread
+
+ And here's what the replacements would expand to in the 1.0 section
+ of "dinner.ini"::
+
+ orderer = chrism
+ coffeesize = small
+ coffeetype = espresso
+ coffeeorder = small espresso
+ breadtype = dinner roll
+
+ During namespace processing, note that replacement targets can be
+ replaced with global, local, or external values.
+
+ It is an error to create two files which contain namespaces which
+ depend on each other's names circularly, and it's an error to refer
+ to a local, global, or external name that cannot be resolved because
+ it does not exist. When Buildit is run, these types of errors are
+ detected and presented to the person executing the Buildit script
+ before any work is actually performed.
+
+Driving Buildit
+
+ An example of kicking off a Buildit process::
+
+ # framework hair
+ from buildit.context import Context
+ from buildit.context import Software
+
+ # your defined "tasks"
+ from mytasks import mkbreakfast
+ from mytasks import mkdinner
+
+ # read default root .ini from file named "/etc/root.ini" and contextualize
+ context = Context('/etc/root.ini')
+
+ # use section 1.1 for dinner namespace instead of default named in root.ini
+ context.set_section('dinner', '1.1')
+
+ # use a different file and section for breakfast namespace instead of default
+ context.set_file('breakfast', '${buildoutdir}/breakfast2.ini',
+ 'coolbreakfast')
+
+ # create a Software instance for both breakfast and dinner
+ breakfast = Software(mkbreakfast, context)
+ dinner = Software(mkdinner, context)
+
+ # override the section value used for coffeetype and install
+ breakfast.set('coffeetype', 'americano')
+ breakfast.install()
+
+ # override the section value used for breadtype and install
+ dinner.set('breadtype', 'wheat')
+ dinner.install()
+
+Driving Buildit More Declaratively via Config File "Instance" Sections
+
+ Optionally, instead of using Python to drive buildit completely
+ procedurally, you may choose to define sections within your root
+ initialization file which represent software "instances" in the form
+ (e.g.)::
+
+ [breakfast:instance]
+ buildit_task = mytasks.mkbreakfast
+ buildit_order = 10
+ coffeetype = americano
+
+ [dinner:instance]
+ buildit_task = mytasks.mydinner
+ buildit_order = 20
+ breadtype = wheat
+
+ Section headers for instance definitions must end in ':instance'.
+ The text that comes before ':instance' is purely informational.
+ Each instance section must include a "buildit_task" value, which
+ should be a Python dotted name identifier that points to a task
+ instance. It can optionally include a "buildit_order" integer
+ value, representing the instance's execution order relative to the
+ other instances defined. Lower-numbered instances are run first.
+ An instance without a buildit_order defaults to zero.
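The ordering rule can be sketched like so (illustrative code, not buildit's loader; sections are modeled here as a plain dict):

```python
def instance_order(sections):
    # Sort ':instance' sections by buildit_order; a missing
    # buildit_order is treated as zero, and lower numbers run first.
    def order(item):
        name, values = item
        return int(values.get('buildit_order', 0))
    return [name for name, values in sorted(sections.items(), key=order)]

sections = {
    'dinner:instance': {'buildit_task': 'mytasks.mydinner',
                        'buildit_order': '20'},
    'breakfast:instance': {'buildit_task': 'mytasks.mkbreakfast',
                           'buildit_order': '10'},
    'snack:instance': {'buildit_task': 'mytasks.mksnack'},
}
print(instance_order(sections))
```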
+
+ The example above does the same thing the procedural example does
+ above, except we cannot substitute a different ini file or a
+ different section name for the breakfast task (there is no analogue
+ to set_file or set_section). All key/value pairs within the section
+ which do not begin with "buildit_" are used as override variables
+ for the namespace of the task named by the buildit_task dotted name.
+ You can choose to put a namespace name after the dotted name value
+ of buildit_task (e.g. 'buildit_task = mytasks.mydinner [breakfast]')
+ to change the namespace in which these overrides will be performed.
+
+ Once you've added instance sections, you can drive your buildout by
+ using the boilerplate script (assuming /etc/root.ini is your config
+ file)::
+
+ from buildit.context import Context
+
+ def main(root_ini):
+     context = Context(root_ini)
+     context.install()
+
+ if __name__ == '__main__':
+     main('/etc/root.ini')
+
+Task Recompletion Algorithm
+
+ A task is considered to be complete if all of the following
+ statements can be made about it:
+
+ - it specifies one or more target files in the task definition
+
+ - all of its target files exist
+
+ If a task does not meet these completion requirements at any given
+ time, on a subsequent run of the recipe file in which it is defined (or
+ from a recipe file in which it is imported and used), its commands,
+ *and all the commands of the tasks which are dependent upon it* will
+ be rerun in dependency order.
+
+ Buildit (unlike make) does not take into account the timestamp of a
+ task's dependent targets when assessing whether a task needs to be
+ recompleted.
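The completeness test above reduces to a couple of lines (a sketch of the rule, not buildit's code):

```python
import os

def is_complete(targets):
    # A task is complete iff it declares at least one target and every
    # target file exists; file timestamps are deliberately ignored.
    return bool(targets) and all(os.path.exists(t) for t in targets)

print(is_complete([]))   # a task with no targets is never complete
```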
+
+Command Library
+
+ The buildit command library includes a number of standard command
+ types that can be used in place of shell commands in the 'commands='
+ argument to a task. Each argument to a command can be a string
+ literal or a string which includes expansion markers. These
+ commands are available via 'from buildit.commandlib import X',
+ where X includes:
+
+ Download -- Download(filename, url, remove_on_error=True).
+
+ CVSCheckout -- CVSCheckout(repo, dir, module, tag=''). repo is the
+ :ext: or :pserver: name of the repository including its path on the
+ remote server's disk, dir is the directory into which we wish to
+ check the module out, module is the CVS module name of the module,
+ tag is the tag name, including the '-r '.
+
+ Symlink -- Symlink(frm, to)
+
+ Patch -- Patch(file, patch, level). 'patch' is the patchfile to
+ apply to 'file'; 'level' is the patch level argument.
+
+ InFileWriter -- InFileWriter(infile, outfile, mode=0755). Replaces
+ all <<HUGGED>> text in infile with an expansion based on the current
+ namespace and the global namespace. Changes mode of outfile to
+ mode.
+
+ Substitute -- Substitute(filename, search_re, replacement_string,
+ backupext='.~subst'). Replaces all text in 'filename' matching the
+ regular expression string 'search_re' with the replacement_string.
+ Make a backup of the original with the original filename plus the
+ backup extension.
+
+ SkelCopier -- SkelCopier(skeldir, tgtdir, destructive=''). Makes an
+ exact copy of one directory ('skeldir') to another ('tgtdir'). If a
+ file in the source directory has a .in extension, replace its
+ <<HUGGED>> values with task interpolated values, and write it to the
+ target directory without the .in extension. By default, the SkelCopier
+ will not overwrite target files that already exist underneath tgtdir.
+ If you pass a true value (such as the string 'yes') as the
+ "destructive" argument, all existing target files will be
+ overwritten.
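The '.in' rule above can be sketched as a small helper (the function name is invented for illustration; SkelCopier does this inline while walking the skeleton directory):

```python
import os

def skel_destination(tgtpath, filename):
    # Mirrors SkelCopier's rule: a file ending in '.in' loses the
    # extension in the target directory (its <<HUGGED>> values get
    # interpolated on the way); any other file keeps its name.
    stem, ext = os.path.splitext(filename)
    if os.path.normcase(ext) == '.in':
        return os.path.join(tgtpath, stem)
    return os.path.join(tgtpath, filename)
```

So a skeleton file zope.conf.in lands as zope.conf in the target tree, while README.txt is copied unchanged.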
+
+Reporting Bugs
+
+ Please use "the buildit issue
+ tracker":http://agendaless.com/Members/chrism/software/buildit_issues
+ to report issues.
+
+Maillist
+
+ The "buildit
+ maillist":http://lists.palladion.com/mailman/listinfo/buildit is
+ where to discuss buildit-related issues.
+
+Have fun!
+
+Chris McDonough (chrism@agendaless.com)
+
7 TODO.txt
@@ -0,0 +1,7 @@
+- Tasks should require local namespace references for namespace-local
+ names in commands, targets, and workdirs.
+
+- Have an "environment" namespace for environment variables? Or look
+ up envvars too when resolving a global?
+
+- allow "namespace='${NAMESPACES/aname}'" ?
1  __init__.py
@@ -0,0 +1 @@
+# this is a package
361 commandlib.py
@@ -0,0 +1,361 @@
+"""
+Commands with more explicit purposes than a generic ShellCommand;
+it's debatable whether these are useful or more readable than just inlining
+the shell command text, but they seem to be a reasonable way to factor things
+so the implementation of common operations can be changed more easily from a
+central place.
+
+A command instance must have at least these methods:
+
+represent(task): Display a human-readable representation of the command.
+ It accepts a Task instance as an argument.
+
+execute(task): Perform the action associated with the command. It
+ accepts a Task instance as an argument.
+
+check(task): Ensure that the command could be run properly, but do not run it.
+ It accepts a Task instance as an argument.
+"""
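A minimal sketch of the three-method protocol described in the docstring above (EchoCommand is an invented example, not part of commandlib):

```python
class EchoCommand:
    """Smallest possible commandlib-style command: represent() and
    check() return the human-readable form; execute() would perform
    the action (here it just returns the same string)."""

    def __init__(self, message):
        self.message = message

    def represent(self, task):
        return 'echo %s' % self.message

    def execute(self, task):
        # A real command would run this, e.g. via os.system.
        return self.represent(task)

    # Many commandlib classes alias check to represent, since merely
    # building the command line exercises the interpolation.
    check = represent

cmd = EchoCommand('hello')
```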
+
+import os
+import sys
+import re
+import shutil
+import tempfile
+
+from task import ShellCommand
+from resolver import MissingDependencyError
+
+class Download(ShellCommand):
+ def __init__(self, filename, url, remove_on_error=True):
+ self.filename = filename
+ self.url = url
+ self.remove_on_error = remove_on_error
+
+ def represent(self, task):
+ filename = task.interpolate(self.filename)
+ url = task.interpolate(self.url)
+ try:
+ return 'wget -O %s %s' % (filename, url)
+ except:
+ if self.remove_on_error:
+ try:
+ os.remove(filename)
+ except:
+ pass
+ raise
+
+ check = represent
+
+class CVSCheckout:
+ def __init__(self, repo, dir, module, tag=''):
+ self.repo = repo
+ self.dir = dir
+ self.module = module
+ self.tag = tag
+
+ def represent(self, task):
+ resolved = task.interpolate_kw(**self.__dict__)
+ return 'cvs -d %(repo)s co -d %(dir)s %(tag)s %(module)s' % resolved
+
+ def execute(self, task):
+ env_munged = False
+ try:
+ if not os.environ.has_key('CVS_RSH'):
+ os.environ['CVS_RSH'] = 'ssh'
+ env_munged = True
+ os.system(self.represent(task))
+ finally:
+ if env_munged:
+ del os.environ['CVS_RSH']
+
+ check = represent
+
+SYMLINK = (
+ 'expected_frm=%(frm)s; link_to=%(to)s; existing_frm=""; '
+ '[ -e "$link_to" ] && x=$(ls -l "$link_to"); existing_frm="${x#*-> }"; '
+ '[ "$expected_frm" = "$existing_frm" ] || rm -f "$link_to"; '
+ '[ -L %(to)s ] || ln -sf %(frm)s %(to)s'
+ )
+
+class Fetch(ShellCommand):
+ SVN_COMMAND = 'svn co %(url)s %(name)s%(versionstr)s'
+ CVS_COMMAND = 'cvs -d %(url)s co -d %(name)s%(versionstr)s %(path)s'
+ def __init__(self, url, name, using, path, version=''):
+ self.url = url
+ self.name = name
+ self.using = using
+ self.version = version
+ self.path = path
+
+ def get_versionstr(self):
+ if self.version:
+ versionstr = '-%s' % self.version
+ else:
+ versionstr = ''
+ return versionstr
+
+ def represent(self, task):
+ stuff = task.interpolate_kw(**self.__dict__)
+ using = stuff['using']
+ stuff['versionstr'] = task.interpolate(self.get_versionstr())
+ if using == 'download':
+ return '<Download fetcher for "%(url)s">' % stuff
+ elif using == 'svn':
+ return self.SVN_COMMAND % stuff
+ elif using == 'cvs':
+ return self.CVS_COMMAND % stuff
+ return '<Unknown fetcher type>'
+
+ def execute(self, task):
+ stuff = task.interpolate_kw(**self.__dict__)
+ using = stuff['using']
+ stuff['versionstr'] = task.interpolate(self.get_versionstr())
+
+ if using == 'svn':
+ result = os.system(self.SVN_COMMAND % stuff)
+ if result == 256:
+ # try 'svn cleanup' once, this doesn't work if the
+ # package has externals though; subversion utterly blows
+ result = self.svn_cleanup(task)
+ if result:
+ task.output("svn cleanup failed")
+ return result
+ result = os.system(self.SVN_COMMAND % stuff)
+
+ elif using == 'cvs':
+ return os.system(self.CVS_COMMAND % stuff)
+
+ elif using == 'download':
+ stuff['tempfile'] = tempfile.mktemp()
+ remove = 'rm -rf %(name)s%(versionstr)s' % stuff
+ task.output(remove)
+ result = os.system(remove)
+ if not result:
+ download = 'wget -q -O %(tempfile)s %(url)s' % stuff
+ task.output(download)
+ result = os.system(download)
+ if not result:
+ try:
+ ftype = 'file %(tempfile)s' % stuff
+ task.output(ftype)
+ fp = os.popen(ftype)
+ text = fp.read()
+ result = fp.close()
+ if not result:
+ if text.lower().find('gzip') != -1:
+ untgz = 'tar xzf %(tempfile)s' % stuff
+ task.output(untgz)
+ result = os.system(untgz)
+ elif text.lower().find('zip') != -1:
+ unzip = 'unzip %(tempfile)s' % stuff
+ task.output(unzip)
+ result = os.system(unzip)
+ else:
+ result = 1
+
+ if not result and stuff['versionstr']:
+ move = ('mv %(name)s %(name)s%(versionstr)s' %
+ stuff)
+ task.output(move)
+ result = os.system(move)
+ finally:
+ os.unlink(stuff['tempfile'])
+
+ else:
+ raise ValueError(
+ 'unknown "using" value %s for Fetch in task %s' %
+ (using, task)
+ )
+ return result
+
+
+ check = represent
+
+ def svn_cleanup(self, task):
+ stuff = task.interpolate_kw(**self.__dict__)
+ stuff['versionstr'] = task.interpolate(self.get_versionstr())
+ command = 'svn cleanup %(name)s%(versionstr)s' % stuff
+ task.output("cleanup required: running %r" % command )
+ result = os.system(command)
+ return result
+
+
+class Symlink(ShellCommand):
+ def __init__(self, frm, to):
+ self.frm = frm
+ self.to = to
+
+ def represent(self, task):
+ resolved = task.interpolate_kw(**self.__dict__)
+ return SYMLINK % resolved
+
+ check = represent
+
+PATCH = (
+ 'test -e %(file)s.origorig || (cp %(file)s %(file)s.aside && '
+ 'patch -f -p%(level)s < %(patch)s && mv %(file)s.aside %(file)s.origorig)'
+ )
+
+class Patch(ShellCommand):
+ def __init__(self, file, patch, level):
+ self.file = file
+ self.patch = patch
+ self.level = level
+
+ def represent(self, task):
+ resolved = task.interpolate_kw(**self.__dict__)
+ return PATCH % resolved
+
+ check = represent
+
+hugger = re.compile(r'<<([\w\.-]+)>>')
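The hugger pattern matches names made of word characters, dots, and dashes that are "hugged" by double angle brackets; anything containing other characters is left alone. Exercised standalone:

```python
import re

# Same pattern as the module-level 'hugger' above.
hugger = re.compile(r'<<([\w\.-]+)>>')

sample = 'host=<<http-host>> port=<<http.port>> and <<not a name!>>'
names = hugger.findall(sample)
# '<<not a name!>>' contains spaces and '!', so it does not match.
```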
+
+class InFileWriter:
+ """Helper class that reads *infile*, and writes it to *outfile*
+ with string interpolation values obtained from the task's repl mapping.
+
+ outfile is created with permissions *mode* (default 0755).
+ """
+ def __init__(self, infile, outfile, mode=0755):
+ self.infile = infile
+ self.outfile = outfile
+ self.mode = mode
+
+ def execute(self, task):
+ infilename = task.interpolate(self.infile)
+ outfilename = task.interpolate(self.outfile)
+ text = open(infilename).read()
+ needs_replacement = hugger.findall(text)
+ # XXX fix me to not combine global and local lookups and not ignore
+ # case
+ for item in needs_replacement:
+ lower = item.lower()
+ try:
+ # try local
+ rvalue = task.interpolate('${./%s}' % lower)
+ except MissingDependencyError:
+ # try global; if this fails, fail.
+ rvalue = task.interpolate('${%s}' % lower)
+ text = text.replace('<<%s>>' % item, rvalue)
+
+ outfile = open(outfilename, 'w')
+ outfile.write(text)
+ outfile.close()
+
+ os.chmod(outfilename, self.mode)
+
+ def represent(self, task):
+ return '<InFileWriter instance for %s>' % task.interpolate(self.outfile)
+
+ def check(self, task):
+ infilename = task.interpolate(self.infile)
+ if not os.path.exists(infilename):
+ raise ValueError('%r does not exist to read as input '
+ 'file' % infilename)
+ return task.interpolate(self.outfile)
+
+class Substitute:
+ def __init__(self, filename, search_re, replacement_string,
+ backup_ext='.~subst'):
+ self.filename = filename
+ self.search_re = search_re
+ self.replacement_string = replacement_string
+ self.backup_ext = backup_ext
+
+ def execute(self, task):
+ search_re = task.interpolate(self.search_re)
+ repl = task.interpolate(self.replacement_string)
+ filename = task.interpolate(self.filename)
+ backup_ext = task.interpolate(self.backup_ext)
+
+ try:
+ search = re.compile(search_re, re.MULTILINE)
+ except:
+ print 'search_re %s could not be compiled' % search_re
+ raise
+
+ bakfilename = '%s%s' % (filename, backup_ext)
+ shutil.copy(filename, bakfilename)
+
+ text = open(filename).read()
+ text = repl.join(search.split(text))
+
+        newfilename = '%s%s' % (filename, '.~new')
+        newfile = open(newfilename, 'w')
+        newfile.write(text)
+        # close before copying the mode and moving so the data is flushed
+        newfile.close()
+        shutil.copymode(filename, newfilename)
+        shutil.move(newfilename, filename)
+
+ def represent(self, task):
+ repl = task.interpolate(self.replacement_string)
+ filename = task.interpolate(self.filename)
+ return 'Substituter for %s in %s' % (repl, filename)
+
+ check = represent
+
+OMIT_DIRS = [os.path.normcase("CVS"), os.path.normcase('.svn')]
+
+class SkelCopier:
+
+ """ Makes an exact copy of one directory to another. If a file
+ in the source directory has a .in extension, replace its values
+ with task interpolated values, and write it to the target directory
+ without the .in extension """
+
+ def __init__(self, skeldir, tgtdir, destructive=''):
+ self.skeldir = skeldir
+ self.tgtdir = tgtdir
+ self.destructive = destructive
+
+ def represent(self, task):
+ stuff = task.interpolate_kw(**self.__dict__)
+ return 'SkelCopier from %(skeldir)s to %(tgtdir)s' % stuff
+
+ check = represent
+
+ def execute(self, task):
+ skeldir = task.interpolate(self.skeldir)
+ tgtdir = task.interpolate(self.tgtdir)
+
+ if not os.path.exists(tgtdir):
+ os.makedirs(tgtdir)
+
+ for dirpath, dirnames, filenames in os.walk(skeldir, topdown=True):
+ rest = dirpath[len(skeldir)+1:]
+ tgtpath = os.path.join(tgtdir, rest)
+
+ # We can manipulate the list of directories directly to
+ # prevent copying as well as further recursion into selected
+ # folders.
+ remove_dirs = [x for x in dirnames if x in OMIT_DIRS]
+ for to_remove in remove_dirs:
+ dirnames.remove(to_remove)
+
+ for filename in filenames:
+ # Copy the file:
+ src = os.path.join(dirpath, filename)
+ sn, ext = os.path.splitext(filename)
+ if os.path.normcase(ext) == ".in":
+ dst = os.path.join(tgtpath, sn)
+ else:
+ dst = os.path.join(tgtpath, filename)
+
+ if os.path.exists(dst) and not self.destructive:
+ continue
+
+ if os.path.normcase(ext) == ".in":
+ infilecopier = InFileWriter(src, dst)
+ infilecopier.execute(task)
+ else:
+ shutil.copyfile(src, dst)
+
+ shutil.copymode(src, dst)
+
+ for dirname in dirnames:
+ # make the directories
+ dn = os.path.join(tgtpath, dirname)
+ if not os.path.exists(dn):
+ os.mkdir(dn)
+ shutil.copymode(os.path.join(dirpath, dirname), dn)
+
282 context.py
@@ -0,0 +1,282 @@
+import re
+import os
+import pwd
+import sys
+import logging
+
+import parser
+import resolver
+
+try:
+ from distutils.util import get_platform
+ platform = get_platform()
+except ImportError:
+ platform = None
+
+def select_one(filename, section_name):
+ sections = parser.parse(filename)
+ section = sections.get(section_name)
+ if section is None:
+ raise ValueError(
+ 'No such section %r in file %r' % (section_name, filename)
+ )
+ return section
+
+def select(selections):
+ raw = {}
+ for ns_name, filename, section_name in selections:
+ section = select_one(filename, section_name)
+ raw[ns_name] = section
+ return raw
+
+def resolve_file_section(filename, section_name, defaults=None):
+ if defaults is None:
+ defaults = {}
+
+ namespace = select_one(filename, section_name)
+ namespaces = {'whatever':namespace}
+ sections = resolver.resolve(namespaces, defaults)
+ return sections['whatever']
+
+DEFAULT_NAMESPACE = 'default_namespace'
+VERSIONFINDER = re.compile(r'\[([\w\.-]+)\]$').search
+
+class Context:
+
+ _resolved = None
+
+ def __init__(self, inifile, buildoutdir=None, logger=None,
+ namespace_overrides=None):
+
+ if buildoutdir is None:
+ # assume that the caller is the main makefile
+ caller_globals = sys._getframe(1).f_globals
+ modulename = caller_globals.get('__file__', sys.argv[0])
+ buildoutdir = os.path.abspath(os.path.dirname(modulename))
+
+ builtins = {
+ 'cwd':os.path.abspath(os.getcwd()),
+ 'userhome':os.path.expanduser('~'),
+ 'username':pwd.getpwuid(os.getuid())[0],
+ 'buildoutdir':buildoutdir,
+ 'platform':platform
+ }
+
+ self.inifile = inifile
+ self.globals = builtins
+ self.namespace_selections = []
+
+ if namespace_overrides is None:
+ namespace_overrides = {}
+ self.namespace_overrides = namespace_overrides
+
+ if logger is None:
+ logger = logging.getLogger()
+ logger.handlers[:] = []
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter('%(message)s')
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+ self.logger = logger
+
+ try:
+ fileglobals = resolve_file_section(inifile, 'globals', builtins)
+ self.globals.update(fileglobals)
+ except ValueError:
+ # tolerate missing [globals] section
+ self.warn('No [globals] section in inifile %s ' % inifile)
+
+ try:
+ namespaces = resolve_file_section(inifile, 'namespaces', builtins)
+ except ValueError:
+ # tolerate missing [namespaces] section
+ self.warn('No [namespaces] section in inifile %s ' % inifile)
+ namespaces = {}
+
+ for ns_name, value in namespaces.items():
+ mo = VERSIONFINDER(value)
+ if mo is None:
+ section_name = DEFAULT_NAMESPACE
+ filename = value.strip()
+ else:
+ section_name = mo.groups()[0]
+ begin, end = mo.span()
+ filename = value[0:begin].strip()
+
+ self.namespace_selections.append((ns_name, filename, section_name))
+
+ def set_section(self, ns_name, new_section_name):
+ for name, file, section_name in self.namespace_selections[:]:
+ if ns_name == name:
+ self.namespace_selections.append((ns_name, file,
+ new_section_name))
+ return
+ raise ValueError('No such namespace to override: %s' % ns_name)
+
+ def set_file(self, ns_name, filename, section_name):
+ self.namespace_selections.append((ns_name, filename, section_name))
+
+ def set_override(self, ns_name, k, v):
+ self._resolved = None
+ override_dict = self.namespace_overrides.setdefault(ns_name, {})
+ override_dict[k] = v
+
+ def resolve(self):
+ if self._resolved is None:
+ selections = select(self.namespace_selections)
+ for ns_name, D in selections.items():
+ overrides = self.namespace_overrides.get(ns_name, {})
+ D.update(overrides)
+ self._resolved = resolver.resolve(selections, self.globals)
+ return self._resolved
+
+ def interpolate(self, s, default_ns, task=None, overrides=None):
+ resolved = self.resolve()
+ if overrides is None:
+ overrides = {}
+ # XXX i wish we didn't need to make a copy here
+ overrides = overrides.copy()
+ overrides.update(self.globals)
+ result = resolver.resolve_value(s, default_ns, resolved, overrides,
+ task)
+ return result
+
+ def warn(self, msg):
+ self.logger.warn(msg)
+
+ def debug(self, msg):
+ self.logger.debug(msg)
+
+ def info(self, msg):
+ self.logger.info(msg)
+
+ def check(self, task):
+ """ Perform a preflight sanity check """
+ self.warn('buildit: context.check() starting with root task named '
+ '"%s"' % task.getName())
+ for task in postorder(task, self):
+ # these will raise errors if there are interpolation problems
+ task.getWorkDir()
+ for command in task.getCommands():
+ command.check(task)
+ task.getTargets()
+ if task.condition:
+ task.condition(task)
+
+ return True
+
+ def install(self):
+ filename = self.inifile
+ sections = parser.parse(filename)
+ tasks = []
+ for sname, section in sections.items():
+ if sname.endswith(':instance'):
+ instancename = sname[:-9]
+ task = section.get('buildit_task')
+ if task is None:
+ raise ValueError(
+ 'buildit: no buildit_task for instance %r' % sname)
+ order = section.get('buildit_order', None)
+ if order is None:
+ self.warn('buildit: WARNING instance %r has no '
+ 'buildit_order' % sname)
+ mo = VERSIONFINDER(task)
+ if mo is not None:
+ ns_name = mo.groups()[0]
+ begin, end = mo.span()
+ dottedname = task[0:begin].strip()
+ task = importable_name(dottedname)
+ else:
+ dottedname = task
+ task = importable_name(dottedname)
+ ns_name = task.namespaces[0]
+ overrides = []
+ for name, value in section.items():
+ if name != 'buildit_task':
+ overrides.append((ns_name, name, value))
+ tasks.append((order, instancename, task, overrides))
+ tasks.sort()
+ for order, instancename, task, overrides in tasks:
+ for ns_name, name, value in overrides:
+ self.set_override(ns_name, name, value)
+ self.warn('buildit: running instance definition %r' % instancename)
+ if self.check(task):
+ self.run(task)
+
+ def run(self, task):
+ """ Run the task and all of its dependents in dependency order """
+ self.warn('buildit: context.run() starting with root task named "%s"' %
+ task.getName())
+ for task in postorder(task, self):
+ if task.hasCommands() and task.needsCompletion():
+ task.attemptCompletion()
+ self.warn('buildit: done with task')
+
+class Software:
+ def __init__(self, task, context):
+ self.task = task
+ self.context = context
+ self.overrides = {}
+
+ def set(self, k, v, ns_name=None):
+ if ns_name is None:
+ ns_name = self.task.namespace
+ self.overrides[(ns_name, k)] = v
+
+ def install(self):
+ for (ns_name, k), v in self.overrides.items():
+ self.context.set_override(ns_name, k, v)
+ if self.context.check(self.task):
+ self.context.run(self.task)
+
+def postorder(startnode, context):
+ """ Postorder depth-first traversal of the dependency graph implied
+ by startnode and its children; set context along the way """
+ seen = {}
+
+ def visit(node):
+ namespace = node.namespace
+ seen[node] = True
+ for child in node.dependencies:
+ if not isinstance(child, node.__class__):
+ raise TypeError("Can't use non-Task %r as a dependency"
+ % child)
+ if child not in seen:
+ for result in visit(child):
+ for namespace in result.getNamespaces():
+ result.setNamespace(namespace)
+ yield result
+
+ node.setContext(context)
+ yield node
+
+ startnode.setContext(context)
+ for namespace in startnode.getNamespaces():
+ startnode.setNamespace(namespace)
+ for node in visit(startnode):
+ yield node
+
+# A datatype that converts a Python dotted-path-name to an object
+
+def importable_name(name):
+ try:
+ components = name.split('.')
+ start = components[0]
+ g = globals()
+ package = __import__(start, g, g)
+ modulenames = [start]
+ for component in components[1:]:
+ modulenames.append(component)
+ try:
+ package = getattr(package, component)
+ except AttributeError:
+ n = '.'.join(modulenames)
+ package = __import__(n, g, g, component)
+ return package
+ except ImportError:
+ import traceback, cStringIO
+ IO = cStringIO.StringIO()
+ traceback.print_exc(file=IO)
+ raise ValueError(
+ 'The object named by %r could not be imported\n%s' % (
+ name, IO.getvalue()))
142 parser.py
@@ -0,0 +1,142 @@
+import re
+import ConfigParser
+
+class BuilditConfigParser(ConfigParser.RawConfigParser):
+
+ # we override the OPTCRE expression below to only consider '=' a
+ # separator (not = and :).
+
+ OPTCRE = re.compile(
+ r'(?P<option>[^=\s][^=]*)'
+ r'\s*(?P<vi>[=])\s*' # any number of space/tab,
+ # followed by = separator
+ # followed by any # space/tab
+ r'(?P<value>.*)$' # everything up to eol
+ )
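The overridden OPTCRE can be exercised standalone to show the effect: only '=' separates an option from its value, so ConfigParser's usual 'name: value' form is rejected (the same pattern as above, with the trailing whitespace that the parser later strips via rstrip):

```python
import re

OPTCRE = re.compile(
    r'(?P<option>[^=\s][^=]*)'
    r'\s*(?P<vi>[=])\s*'
    r'(?P<value>.*)$'
)

mo = OPTCRE.match('effective-user = chrism')
option = mo.group('option').rstrip()  # the option group keeps trailing spaces
value = mo.group('value')

# colon-separated lines do not match at all
colon = OPTCRE.match('effective-user: chrism')
```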
+
+ def optionxform(self, v):
+ # do not munge case as RawConfigParser does.
+ return v
+
+ def _read(self, fp, fpname):
+ """Parse a sectioned setup file.
+
+        Each section in the setup file contains a title line at the
+        top, indicated by a name in square brackets (`[]'), plus
+        key/value option lines in `name = value' format.
+ Continuations are represented by an embedded newline then
+ leading whitespace. Blank lines, lines beginning with a '#',
+ and just about everything else are ignored.
+ """
+ cursect = None # None, or a dictionary
+ curxformer = None
+ optname = None
+ lineno = 0
+ e = None # None, or an exception
+ while True:
+ line = fp.readline()
+ if not line:
+ break
+ lineno = lineno + 1
+ # comment or blank line?
+ if line.strip() == '' or line[0] in '#;':
+ continue
+ # continuation line?
+ if line[0].isspace() and cursect is not None and optname:
+ value = line.strip()
+ if value:
+ old = cursect[optname]
+ if curxformer:
+ value = extenders[curxformer](old, value)
+ else:
+ cursect[optname] = "%s\n%s" % (old, value)
+ # a section header or option header?
+ else:
+ # is it a section header?
+ mo = self.SECTCRE.match(line)
+ if mo:
+ sectname = mo.group('header')
+ if sectname in self._sections:
+ cursect = self._sections[sectname]
+ elif sectname == ConfigParser.DEFAULTSECT:
+ cursect = self._defaults
+ else:
+ cursect = {'__name__': sectname}
+ self._sections[sectname] = cursect
+ # So sections can't start with a continuation line
+ optname = None
+ # no section header in the file?
+ elif cursect is None:
+ raise ConfigParser.MissingSectionHeaderError(fpname,
+ lineno, line)
+ # an option line?
+ else:
+ mo = self.OPTCRE.match(line)
+ if mo:
+ optname, vi, optval = mo.group('option', 'vi', 'value')
+ if vi == '=' and ';' in optval:
+ # ';' is a comment delimiter only if it follows
+ # a spacing character
+ pos = optval.find(';')
+ if pos != -1 and optval[pos-1].isspace():
+ optval = optval[:pos]
+ optval = optval.strip()
+ # allow empty values
+ if optval == '""':
+ optval = ''
+ optname = optname.rstrip()
+ optname, optval, curxformer = typexform(optname, optval,
+ fpname, lineno,
+ line)
+ cursect[optname] = optval
+ else:
+ # a non-fatal parsing error occurred. set up the
+ # exception but keep going. the exception will be
+ # raised at the end of the file and will contain a
+ # list of all bogus lines
+ if not e:
+ e = ConfigParser.ParsingError(fpname)
+ e.append(lineno, repr(line))
+ # if any parsing errors occurred, raise an exception
+ if e:
+ raise e
+
+def xformtokens(s):
+ return [x.strip() for x in s.split()]
+
+def extendtokens(old, new):
+ new = xformtokens(new)
+ old.extend(new)
+ return old
+
+xforms = {'tokens':xformtokens}
+extenders = {'tokens':extendtokens}
+
+def typexform(optname, optval, fpname, lineno, line):
+ if not ':' in optname:
+ return optname, optval, None
+ optname, xformername = optname.split(':', 1)
+ xformer = xforms.get(xformername)
+ if xformer is None:
+ e = ConfigParser.ParsingError(fpname)
+ e.append(lineno, repr(line))
+ raise e
+ return optname, xformer(optval), xformername
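For example, an option named 'parts:tokens' has its value split into a list instead of being kept as a plain string. A standalone rerun of the helpers above:

```python
def xformtokens(s):
    # same transform as the module-level helper above
    return [x.strip() for x in s.split()]

# 'parts:tokens = zope plone mysql' style option line, already split
# into name and value by the option regex:
name, value = 'parts:tokens', 'zope plone  mysql'

optname, xformername = name.split(':', 1)
tokens = xformtokens(value)
```

Continuation lines for such an option are routed through the matching extender, which appends more tokens to the existing list.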
+
+def parse(filename, defaults=None):
+ sections = {}
+
+ if defaults is None:
+ defaults = {}
+ if hasattr(filename, 'readline'):
+ fp = filename
+ else:
+ fp = open(filename, 'r')
+
+ parser = BuilditConfigParser(defaults)
+ parser.readfp(fp)
+
+ for section in parser.sections():
+ sections[section] = dict(parser.items(section))
+
+ return sections
288 resolver.py
@@ -0,0 +1,288 @@
+import re
+
+# "bracketed" names are meant to be any of:
+# ${foo} (name foo available in defaults)
+# ${./foo} (foo defined within the section in which the value lives)
+# ${foo/bar} (bar available within the foo namespace)
+# ${foo/bar/baz} (bar/baz available within the foo namespace)
+
+# the TOKEN_RE below will also find, for example
+# ${/foo/bar}
+# ${foo/}
+# even though the meaning of these names is undefined
+
+TOKEN_RE = re.compile(r'\$\{([\.\w/-]+)\}')
+DEFAULT = ()
+LOCAL = '.'
+
+class DependencyError(ValueError):
+ pass
+
+class CyclicDependencyError(DependencyError):
+ def __str__(self):
+ L = []
+ cycles = self.args[0]
+ for cycle in cycles:
+ section, dependent = cycle
+ dependees = cycles[cycle]
+ L.append('In section %r, option %r depends on options %r' % (
+ section, dependent, dependees)
+ )
+ msg = '; '.join(L)
+ return '<CyclicDependencyError: %s>' % msg
+
+class MissingDependencyError(DependencyError):
+ def __init__(self, section_name, option_name, value, offender):
+ self.args = [(section_name, option_name, value, offender)]
+ self.section_name = section_name
+ self.option_name = option_name
+ self.value = value
+ self.offender = offender
+
+ def __str__(self):
+ return (
+ '<MissingDependencyError in section named %r, option named '
+ '%r, value %r, offender %r>' % (
+ self.section_name, self.option_name, self.value, self.offender)
+ )
+
+def resolve(sections, defaults=None, overrides=None):
+ """
+ Resolve the replacement values in the dict of unresolved values
+ named 'sections'.
+ """
+ if defaults is None:
+ defaults = {}
+ if overrides is None:
+ overrides = {}
+
+ missing = []
+ sections_copy = sections.copy()
+
+ # overrides is a dict of section names to override dictionaries
+ # XXX this really should feel better
+ for secname, odict in overrides.items():
+ if sections_copy.has_key(secname):
+ for k, v in odict.items():
+ sections_copy[secname][k] = v
+
+    # we need to process the sections in a resolvable order based on the
+ # replacement values they contain, relative to one another.
+ ordered_sections = section_resolution_order(sections_copy)
+
+ for section_name in ordered_sections:
+ # mutate the copy as we resolve values; each successive call to
+ # resolve_options will be using the most "fully resolved" version of the
+ # copy of the sections dictionary possible. Because the sections
+ # are processed in order, transitive replacements should work
+ # properly.
+ sections_copy[section_name] = resolve_options(
+ section_name, sections_copy, defaults)
+
+ return sections_copy
+
+def section_resolution_order(sections):
+ order = []
+ items = []
+
+ for section_name in sections.keys():
+ items.append(section_name)
+ options = sections[section_name]
+ for k, v in options.items():
+ try:
+ tokens = TOKEN_RE.findall(v)
+ except TypeError:
+ raise TypeError('value %r must be a string' % v)
+ for token in tokens:
+ namespace, name = interpret_substitution_token(token)
+ if namespace not in (DEFAULT, LOCAL):
+ order.append((namespace, section_name))
+
+ return topological_sort(items, order)
+
+def resolve_options(section_name, sections, defaults=None, overrides=None):
+ """
+ Given a section_name and a sections dict for which
+ sections[section_name] contains one or more 'options' dictionaries
+ with string keys and values referencing those keys,
+ e.g. ${./localname}' and ${globalname} and ${external/name},
+ resolve the values and return a 'resolved' copy of the options
+ dictionary referred to by 'section_name'. Use the 'defaults'
+ dictionary passed in to resolve 'default' names, use the local
+ section to resolve 'local' names, and use the 'sections'
+ dictionary passed in to resolve 'external' names. This function
+ will only expand things properly for external names if the
+    dependencies of 'section_name' have already been resolved within
+ 'sections'.
+ """
+ if defaults is None:
+ defaults = {}
+ if overrides is None:
+ overrides = {}
+
+ options = sections[section_name]
+ options_copy = options.copy()
+ sections_copy = sections.copy()
+ options_copy.update(overrides)
+ sections_copy[section_name] = options_copy
+
+ missing = []
+
+    # we need to process the option values in a resolvable order based on the
+ # replacement values they contain, relative to one another.
+ options = sections_copy[section_name]
+ index, ordered_keys = option_resolution_order(options_copy, section_name)
+
+ for option_name in ordered_keys:
+ # mutate the copy as we resolve values; each successive call to
+ # resolve_option will be using the most "fully resolved" version of the
+ # copy of the options dictionary possible. This is how transitive
+ # replacements work.
+ v = options_copy.get(option_name)
+ if v is None:
+ # the user may have supplied a missing option name
+ offenders = index[option_name]
+ value, offender = offenders[0]
+ raise MissingDependencyError(section_name, option_name, offender,
+ value)
+ else:
+ options_copy[option_name] = resolve_value(
+ v, section_name, sections_copy, defaults, option_name
+ )
+
+ sections_copy[section_name] = options_copy
+
+ return options_copy
+
+def option_resolution_order(options, section_name=None):
+ order = []
+ items = []
+ index = {}
+
+ for k, v in options.items():
+ items.append(k)
+ try:
+ tokens = TOKEN_RE.findall(v)
+ except TypeError:
+ raise TypeError('value %r must be a string' % v)
+ for token in tokens:
+ namespace, name = interpret_substitution_token(token)
+ # we don't resolve external references here, and global references
+ # aren't required to be resolved in any particular order because
+ # they're by nature fully qualified, so we just need to deal with
+ # local values when determining a resolution order
+ if namespace == LOCAL:
+ order.append((name, k))
+ L = index.setdefault(name, [])
+ L.append((k, v))
+
+ return index, topological_sort(items, order, section_name=section_name)
+
+def resolve_value(value, section_name, sections, defaults=None,
+ option_name=None):
+ if defaults is None:
+ defaults = {}
+ options = sections.get(section_name)
+
+ try:
+ tokens = TOKEN_RE.findall(value)
+ except TypeError:
+ raise TypeError('value %r must be a string' % value)
+
+ missing = []
+ orig_value = value
+
+ for token in tokens:
+ namespace, name = interpret_substitution_token(token)
+ if namespace == DEFAULT:
+ lookup = defaults
+ elif namespace == LOCAL:
+ if options is None:
+ raise MissingDependencyError(section_name, option_name,
+ orig_value, name)
+ lookup = options
+ else:
+ lookup = sections.get(namespace, {})
+
+ substring = '${%s}' % token
+ option = lookup.get(name)
+ if option is None:
+ raise MissingDependencyError(section_name, option_name, orig_value,
+ name)
+ value = value.replace(substring, lookup[name])
+
+ return value
+
+def interpret_substitution_token(token):
+ tup = token.split('/', 1)
+ if len(tup) == 1:
+ return DEFAULT, token
+ else:
+ return tup # namespace, name
+
+def topological_sort(items, partial_order, section_name=None,
+ ignore_missing_partials=True):
+
+ """ Stolen from http://www.bitinformation.com/art/python_topsort.html
+
+ Given the example list of items ['item2', 'item3', 'item1',
+ 'item4'] and a 'partial order' list in the form [(item1, item2),
+ (item2, item3)], where the example tuples indicate that 'item1'
+ should precede 'item2' and 'item2' should precede 'item3', return
+ the sorted list of items ['item1', 'item2', 'item3', 'item4'].
+ Note that since 'item4' is not mentioned in the partial ordering
+ list, it will be at an arbitrary position in the returned list.
+
+ """
+
+ def add_node(graph, node):
+ if not graph.has_key(node):
+ graph[node] = [0] # 0 = number of arcs coming into this node
+
+ def add_arc(graph, fromnode, tonode):
+ graph[fromnode].append(tonode)
+ graph[tonode][0] = graph[tonode][0] + 1
+
+ graph = {}
+
+ for v in items:
+ add_node(graph, v)
+
+ for a, b in partial_order:
+ if ignore_missing_partials:
+ # don't fail if a value is present in the partial_order
+ # list but missing in items. In this mode, we fake up a
+ # value instead of raising a KeyError when trying to use
+            # add_arc in order to be able to produce error reports
+ # that aggregate both local and default error conditions
+ # in callers.
+ if not graph.has_key(a):
+ graph[a] = [0]
+ elif not graph.has_key(b):
+ graph[b] = [0]
+ else:
+ add_arc(graph, a, b)
+ else:
+ add_arc(graph, a, b)
+
+ roots = [ node for (node, nodeinfo) in graph.items() if nodeinfo[0] == 0 ]
+
+ sorted = []
+
+ while len(roots) != 0:
+ root = roots.pop()
+ sorted.append(root)
+ for child in graph[root][1:]:
+ graph[child][0] = graph[child][0] - 1
+ if graph[child][0] == 0:
+ roots.append(child)
+ del graph[root]
+
+ if len(graph) != 0:
+ # loop in input
+ cycledeps = {}
+ for k, v in graph.items():
+ cycledeps[(section_name, k)] = v[1:]
+ raise CyclicDependencyError(cycledeps)
+
+ return sorted
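The contract of the function above can be shown with a condensed standalone rendering of the same Kahn-style algorithm (incoming-arc counts, repeatedly emitting zero-count roots), using the example from the docstring:

```python
def topsort(items, partial_order):
    # graph[node] = [incoming-arc count, child, child, ...],
    # mirroring the representation used above
    graph = {item: [0] for item in items}
    for a, b in partial_order:
        if a in graph and b in graph:
            graph[a].append(b)
            graph[b][0] += 1
    roots = [n for n, info in graph.items() if info[0] == 0]
    ordered = []
    while roots:
        root = roots.pop()
        ordered.append(root)
        for child in graph[root][1:]:
            graph[child][0] -= 1
            if graph[child][0] == 0:
                roots.append(child)
        del graph[root]
    if graph:
        # nodes left over with nonzero counts imply a cycle
        raise ValueError('cycle detected: %r' % sorted(graph))
    return ordered

result = topsort(['item2', 'item3', 'item1', 'item4'],
                 [('item1', 'item2'), ('item2', 'item3')])
```

item4 is unconstrained, so only the relative positions of item1, item2, and item3 are guaranteed.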
13 setup.py
@@ -0,0 +1,13 @@
+#!/usr/bin/env python
+
+from distutils.core import setup
+
+setup (name = 'buildit',
+ url = 'http://agendaless.com/Members/chrism/software/buildit',
+ version = '1.1',
+ description = 'buildit parsing and conditional execution system',
+ author = "Chris McDonough",
+ author_email="chrism@agendaless.com",
+ packages = ['buildit', ],
+ package_dir = {'buildit':'.'},
+ )
347 task.py
@@ -0,0 +1,347 @@
+import UserDict
+import os
+import warnings
+import sys
+import resolver
+
+class TaskError(ValueError):
+ pass
+
+class TargetNotCreatedError(TaskError):
+ pass
+
+VERBOSE = 0
+STRICT_TARGET_CHECKING = 1
+MAKEFILE_TIMESTAMP_CHECKING = 0
+DEPENDENCY_TARGET_TIMESTAMP_CHECKING = 0
+FALSE_CONDITION = ()
+FALSE_TARGETS = (FALSE_CONDITION,)
+
+class Task:
+
+ namespace = ''
+ context = None
+
+ def __init__(self, name, namespaces=(), targets=(), dependencies=(),
+ commands=(), workdir=None, condition=None, mglobals=None):
+ self.name = name
+ self.namespaces = sequence_helper(namespaces)
+ self.targets = sequence_helper(targets)
+ self.dependencies = sequence_helper(dependencies)
+ self.commands = sequence_helper(commands)
+ self.workdir = workdir
+ self.condition = condition
+
+ if mglobals is None:
+            # Keep hold of the caller's globals so we can figure out
+            # whether the source file has changed.  This is sneaky,
+            # but it's more convenient than requiring globals to be
+            # passed explicitly to each task's constructor.
+ caller_globals = sys._getframe(1).f_globals
+ else:
+ caller_globals = mglobals
+
+ makefile = caller_globals.get('__file__', sys.argv[0])
+        if makefile.endswith('.pyc') or makefile.endswith('.pyo'):
+ makefile = makefile[:-1]
+
+ self.makefile = os.path.abspath(makefile)
+ self.builtins = {'makefile':self.makefile,
+ 'makefiledir':os.path.dirname(self.makefile),}
+
+ # getters
+
+ def getMakefile(self):
+ return self.makefile
+
+ def getWorkDir(self):
+ if self.workdir:
+ return self.interpolate(self.workdir)
+ return None
+
+ def getTargets(self):
+ targets = []
+ if self.condition is not None:
+ if not self.condition(self):
+ return FALSE_TARGETS
+
+ if self.targets:
+ for target in self.targets:
+ target = self.interpolate(target)
+ workdir = self.getWorkDir()
+ if target and not os.path.isabs(target):
+ if workdir:
+ target = os.path.join(workdir, target)
+ else:
+ self.context.warn(
+ 'Relative target %r specified without a working '
+ 'directory' % target
+ )
+ targets.append(target)
+ return targets
+
+ def getName(self):
+ return self.name
+
+ def getNamespaces(self):
+ L = []
+ for namespace in self.namespaces:
+ val = self.interpolate(namespace)
+ if not hasattr(val, '__iter__'):
+ # allow either lists or space-separated tokens in a single
+ # string
+ val = val.strip().split()
+ L.extend(val)
+ if not L:
+ # a task without any namespaces has a single namespace, None
+ L = [None]
+ return L
+
+ def setNamespace(self, namespace):
+ self.namespace = namespace
+
+ # helpers
+
+ def __repr__(self):
+ return '<%s %s named "%s">' % (self.__class__, id(self), self.name)
+
+ def interpolate_kw(self, **kw):
+ result = {}
+ for k, v in kw.items():
+ result[k] = self.interpolate(v)
+ return result
+
+ def interpolate(self, value):
+ if self.context is None:
+ raise TaskError('Tasks require a context to perform interpolation')
+ return self.context.interpolate(value, self.namespace, self,
+ self.builtins)
+
+ def output(self, s):
+ if self.context is None:
+ raise TaskError('Tasks require a context to perform logging')
+ self.context.warn('buildit: ' + str(s))
+
+ def setContext(self, context):
+ self.context = context
+
+ def getContext(self):
+ return self.context
+
+ # meat
+
+ def needsCompletion(self):
+
+        """ If our target doesn't exist or any of our dependencies'
+        targets are newer than our target, return True, else return
+        False """
+
+ name = self.getName()
+
+ if not self.targets:
+ # if we don't have a target, we always need completion
+ if VERBOSE:
+ self.output('"%s" has a null set of targets' % name)
+ return 1
+
+ if self.condition is not None:
+
+ if not self.condition(self):
+ self.output('"%s" had a condition which prevents this '
+ 'target from needing completion' % name)
+ return 0
+
+ missing = self.missingTargets()
+ if missing:
+ errormsg = []
+ for target in missing:
+                # if one of our targets doesn't exist we definitely
+                # need completion
+ errormsg.append('"%s" is missing target %s' % (name, target))
+ self.output('\n'.join(errormsg))
+ return 1
+
+ # otherwise, we *might* need to recomplete if one of our
+ # dependencies' target's timestamps is newer than our target's
+ # timestamp or our makefile is newer than any of our targets
+ if DEPENDENCY_TARGET_TIMESTAMP_CHECKING:
+ if self.dependencyTargetIsNewer():
+ self.output('"%s" has a dependency with a newer target' %
+ name)
+ return 1
+ if MAKEFILE_TIMESTAMP_CHECKING:
+ olders = self.targetsOlderThanMakefile()
+ if olders:
+ errormsg = []
+ for older in olders:
+ errormsg.append(
+ 'makefile "%s" is newer than target "%s" of task "%s"'
+ % (self.makefile, older, self.getName())
+ )
+ self.output('\n'.join(errormsg))
+ return 1
+ return 0
+
+ def targetsOlderThanMakefile(self):
+
+ """ If the makefile in which the task instance has been defined
+ is newer than our target, return true """
+
+ name = self.getName()
+ targets = self.getTargets()
+ makefile = self.getMakefile()
+ olders = []
+ for target in targets:
+ if os.path.getmtime(makefile) > os.path.getmtime(target):
+ olders.append(target)
+ return olders
+
+ def dependencyTargetIsNewer(self):
+
+ """ If any of our dependencies' targets are newer than our
+ target, return True, else return False """
+
+ targets = self.getTargets()
+ name = self.getName()
+
+ for dep in self.dependencies:
+ deptargets = dep.getTargets()
+ depname = dep.getName()
+
+ if not deptargets:
+ # dependencies with no target are always newer, shortcut
+ if VERBOSE:
+ self.output('%s: dependency "%s" has a null target '
+ 'set so is '
+ 'newer'% (name, depname)
+ )
+ return 1
+
+ missingdeptarget = None
+ for deptarget in deptargets:
+ if deptarget is not FALSE_CONDITION:
+ if not os.path.exists(deptarget):
+ # nonexistent dependency target, it will definitely need
+ # completion
+ self.output('%s: "%s" missing dependency target '
+ '%s' %
+ (name, depname, deptarget)
+ )
+ missingdeptarget = 1
+
+ if missingdeptarget:
+ return 1
+
+ newerdeptarget = None
+ for deptarget in deptargets:
+ for target in targets:
+ if (target is not FALSE_CONDITION and
+ deptarget is not FALSE_CONDITION):
+ if os.path.getmtime(deptarget) > os.path.getmtime(
+ target):
+ self.output('%s: dependency "%s" has a newer '
+ 'tgt "%s"' %
+ (name, depname, deptarget)
+ )
+ newerdeptarget = 1
+ if newerdeptarget:
+ return 1
+
+ return 0
+
+ def getCommands(self):
+ return command_helper(self.commands)
+
+ def attemptCompletion(self):
+
+ """ Do the work implied by the task (presumably create the
+ target file)"""
+
+ name = self.getName()
+
+ old_workdir = os.getcwd()
+ try:
+ self.output('executing %s' % name)
+ workdir = self.getWorkDir()
+ if workdir:
+ os.chdir(workdir)
+ if VERBOSE:
+ self.output('changed working directory to %s' %
+ workdir)
+ for command in self.getCommands():
+ self.output('running "%s"' % command.represent(self))
+ status = command.execute(self)
+ if status:
+                    raise TaskError(
+ 'Task "%s": command "%s" failed with status code '
+ '"%s"' % (name, command, status)
+ )
+ finally:
+ os.chdir(old_workdir)
+ if VERBOSE:
+ self.output('reset working directory to %s' % old_workdir)
+ if not self.getTargets():
+ # it's completed if we don't have a target
+ return 1
+ if STRICT_TARGET_CHECKING:
+ missing_targets = self.missingTargets()
+ if missing_targets:
+ errormsg = []
+ for missing in missing_targets:
+ errormsg.append('target of %s (%s) was not created and '
+ 'STRICT_TARGET_CHECKING is turned on' % (name,
+ missing)
+ )
+ raise TargetNotCreatedError('\n'.join(errormsg))
+ for target in self.getTargets():
+ if target is not FALSE_CONDITION:
+ # touch the target file
+ os.utime(target, None)
+
+ return not self.needsCompletion()
+
+ def missingTargets(self):
+
+ """ Check for the existence of our target files """
+
+ targets = self.getTargets()
+ missing = []
+
+ for target in targets:
+ exists = os.path.exists(target)
+ if not exists:
+ missing.append(target)
+
+ return missing
+
+ def hasCommands(self):
+        return bool(self.commands)
+
+def sequence_helper(val):
+ if not isinstance(val, (list, tuple)):
+ val = [val]
+ return list(val)
+
+class ShellCommand:
+ def __init__(self, shell_command):
+ self.shell_command = shell_command
+
+ def represent(self, task):
+ return task.interpolate(self.shell_command)
+
+ def execute(self, task):
+ return os.system(self.represent(task))
+
+ def check(self, task):
+ return self.represent(task)
+
+def command_helper(seq):
+ L = []
+ for item in seq:
+ if isinstance(item, basestring):
+ # if the item is a string, it's a shell command
+ item = ShellCommand(item)
+        # otherwise it's assumed to be a callable that has represent,
+        # execute, and check methods
+ L.append(item)
+ return L
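As a usage sketch of the represent/execute command protocol above, a stub task whose `interpolate` is the identity is enough to run a command outside a full buildit context (`StubTask` is illustrative and not part of buildit; `ShellCommand` is restated here so the snippet is self-contained):

```python
import os

class ShellCommand:
    # Same protocol as in commandlib: represent() interpolates the
    # command string via the task, execute() runs it through a shell.
    def __init__(self, shell_command):
        self.shell_command = shell_command

    def represent(self, task):
        return task.interpolate(self.shell_command)

    def execute(self, task):
        # os.system returns the command's exit status (0 on success)
        return os.system(self.represent(task))

class StubTask:
    # Identity interpolation: no ${...} substitution is performed.
    def interpolate(self, value):
        return value

cmd = ShellCommand('exit 0')
assert cmd.represent(StubTask()) == 'exit 0'
assert cmd.execute(StubTask()) == 0
```

A real `Task` supplies the interpolation machinery, so `${...}` references in the command string are resolved before the shell sees them; the stub simply passes the string through unchanged.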
999 tests.py
@@ -0,0 +1,999 @@
+import os
+import tempfile
+import unittest
+import shutil
+
+class ResolverTests(unittest.TestCase):
+
+ def test_resolve_options_localonly(self):
+ options = {'name1':'hello ${./name2}',
+ 'name2':'${./name3} goodbye',
+ 'name3':'this is name3',
+ 'name4':'this is name4',
+ 'another':'another',
+ 'repeated':'${./another} ${./another} ${./name3}'}
+ from resolver import resolve_options
+ new = resolve_options('foo', {'foo':options})
+ self.assertEqual(new, {'name1':'hello this is name3 goodbye',
+ 'name2':'this is name3 goodbye',
+ 'name3':'this is name3',
+ 'name4':'this is name4',
+ 'repeated':'another another this is name3',
+ 'another':'another'})
+
+ def test_resolve_options_directloop(self):
+ options = {'name1':'${./name2}',
+ 'name2':'${./name1}',
+ 'name3':'this is name3',
+ 'name4':'this is name4',
+ 'another':'another'}
+ from resolver import resolve_options
+ from resolver import CyclicDependencyError
+
+ try:
+ resolve_options('foo', {'foo':options})
+ except CyclicDependencyError, why:
+ self.assertEqual(why[0],
+ {
+ ('foo', 'name2'): ['name1'],
+ ('foo', 'name1'): ['name2'],
+ })
+ self.assertEqual(str(why),
+ ("<CyclicDependencyError: In section 'foo', option 'name1' "
+ "depends on options ['name2']; In section 'foo', option "
+ "'name2' depends on options ['name1']>"))
+ else:
+ raise AssertionError('did not raise')
+
+ def test_resolve_options_indirectloop(self):
+ options = {'name1':'${./name2}',
+ 'name2':'${./name3}',
+ 'name3':'${./name1}this is name3',
+ 'name4':'this is name4',
+ 'another':'another'}
+ from resolver import resolve_options
+ from resolver import CyclicDependencyError
+
+ try:
+ resolve_options('foo', {'foo':options})
+ except CyclicDependencyError, why:
+ self.assertEqual(why[0],
+ {
+ ('foo', 'name2'): ['name1'],
+ ('foo', 'name3'): ['name2'],
+ ('foo', 'name1'): ['name3']
+ })
+ else:
+ raise AssertionError('did not raise')
+
+ def test_resolve_options_defaults(self):
+ options = {'name1':'hello ${./name2}',
+ 'name2':'${./name3} goodbye ${default1}',
+ 'name3':'this is name3 ${default2}',
+ 'name4':'this is name4',
+ 'another':'another'}
+ from resolver import resolve_options
+ new = resolve_options('foo', {'foo':options},
+ defaults={'default1':'singsong',
+ 'default2':'JAMMA'})
+ self.assertEqual(new,
+ {'name1':'hello this is name3 JAMMA goodbye singsong',
+ 'name2':'this is name3 JAMMA goodbye singsong',
+ 'name3':'this is name3 JAMMA',
+ 'name4':'this is name4',
+ 'another':'another'})
+
+ def test_resolve_options_missing_locals(self):
+ options = {'name1':'hello ${./localmissing1}',
+ 'name2':'${./localmissing2} goodbye ${defaultmissing1}',
+ 'name3':'this is name3 ${defaultmissing2}',
+ 'name4':'this is name4',
+ 'another':'another'}
+ from resolver import resolve_options
+ from resolver import MissingDependencyError
+ from resolver import LOCAL
+ from resolver import DEFAULT
+ try:
+ new = resolve_options('foo', {'foo':options})
+ except MissingDependencyError, why:
+ self.assertEqual(why.section_name, 'foo')
+ self.assertEqual(why.option_name, 'localmissing1')
+ self.assertEqual(why.value, 'hello ${./localmissing1}')
+ self.assertEqual(why.offender, 'name1')
+ else:
+            raise AssertionError('did not raise')
+
+ def test_resolve_ok(self):
+ options1 = {'name1_options1':'hello ${./name2_options1}',
+ 'name2_options1':'${./name3_options1} goodbye',
+ 'name3_options1':'${default_options1}',
+ 'name4_options1':'this is name4',
+ 'another_options1':'another'}
+ options2 = {'name1_options2':'hello ${./name2_options2}',
+ 'name2_options2':'${./name3_options2} goodbye',
+ 'name3_options2':'${default_options2}',
+ 'name4_options2':'this is name4',
+ 'another_options2':'another',
+ 'external1':'${options1/name3_options1}',
+ 'external2':'${options1/name2_options1}'}
+
+
+ defaults = {'default_options1':'default_for_options1',
+ 'default_options2':'default_for_options2'}
+ sections = {'options1':options1, 'options2':options2}
+
+ from resolver import resolve
+ resolved = resolve(sections, defaults)
+
+ self.assertEqual(resolved['options1'],
+ {'name1_options1':'hello default_for_options1 goodbye',
+ 'name2_options1': 'default_for_options1 goodbye',
+ 'name3_options1': 'default_for_options1',
+ 'name4_options1': 'this is name4',
+ 'another_options1': 'another'},
+ )
+ self.assertEqual(resolved['options2'],
+ {
+ 'name1_options2':'hello default_for_options2 goodbye',
+ 'name2_options2': 'default_for_options2 goodbye',
+ 'name3_options2': 'default_for_options2',
+ 'name4_options2': 'this is name4',
+ 'another_options2': 'another',
+ 'external1':'default_for_options1',
+ 'external2':'default_for_options1 goodbye'
+ }
+ )
+
+ def test_resolve_missingexternal(self):
+ options1 = {'name1_options1':'hello ${./name2_options1}',
+ 'name2_options1':'${./name3_options1} goodbye',
+ 'name3_options1':'${default_options1}',
+ 'name4_options1':'this is name4',
+ 'another_options1':'another'}
+ options2 = {'name1_options2':'hello ${./name2_options2}',
+ 'name2_options2':'${./name3_options2} goodbye',
+ 'name3_options2':'${default_options2}',
+ 'name4_options2':'this is name4',
+ 'another_options2':'another',
+ 'external1':'${options1/missing1}',
+ 'external2':'${options1/missing2}'}
+
+ defaults = {'default_options1':'default_for_options1',
+ 'default_options2':'default_for_options2'}
+ sections = {'options1':options1, 'options2':options2}
+
+ from resolver import resolve
+ from resolver import resolve_options
+ from resolver import MissingDependencyError
+ from resolver import LOCAL
+ from resolver import DEFAULT
+
+ try:
+ resolve(sections, defaults)
+ except MissingDependencyError, why:
+ self.assertEqual(why.section_name, 'options2')
+ self.assertEqual(why.option_name, 'external1')
+ self.assertEqual(why.value, '${options1/missing1}')
+ self.assertEqual(why.offender, 'missing1')
+ else:
+ raise AssertionError('No raise')
+
+ def test_resolve_directloop(self):
+ options1 = {'name1':'hello ${options2/name1}'}
+ options2 = {'name1':'hello ${options1/name1}'}
+
+ sections = {'options1':options1, 'options2':options2}
+
+ from resolver import resolve
+ from resolver import CyclicDependencyError
+
+ try:
+ resolve(sections)
+ except CyclicDependencyError, why:
+ self.assertEqual(
+ why[0],
+ {(None, 'options1'): ['options2'],
+ (None, 'options2'): ['options1']}
+ )
+ else:
+ raise AssertionError('No raise')
+
+ def test_resolve_indirectloop(self):
+ options1 = {'name1':'hello ${options2/name1}'}
+ options2 = {'name1':'hello ${options3/name1}'}
+ options3 = {'name1':'hello ${options1/name1}'}
+
+ sections = {'options1':options1, 'options2':options2,
+ 'options3':options3}
+
+ from resolver import resolve
+ from resolver import CyclicDependencyError
+
+ try:
+ resolve(sections)
+ except CyclicDependencyError, why:
+ self.assertEqual(
+ why[0],
+ {(None, 'options3'): ['options2'],
+ (None, 'options1'): ['options3'],
+ (None, 'options2'): ['options1']}
+ )
+ else: