m4_include(/mcs/m4/worksp.lib.m4)
_NIMBUS_HEADER(Changelog)
_NIMBUS_HEADER2(n,n,n,n,y,n,n)
_NIMBUS_LEFT2_COLUMN
_NIMBUS_LEFT2_ABOUT_SIDEBAR(n,n,n,y)
_NIMBUS_LEFT2_COLUMN_END
_NIMBUS_CENTER2_COLUMN
_NIMBUS_IS_DEPRECATED
<h2>Changelog</h2>
<p>For cloud client changes, see
<a href="http://github.com/nimbusproject/nimbus/raw/master/cloud-client/nimbus-cloud-client-src/CHANGES.txt">here.</a>
</p>
<a name="2.8"> </a>
<i>2.8 - Summary</i>
<ul>
<li>
<p>
This release contains many important bug fixes as well as some new features.
</p>
</li>
<li>
<p>
Propagation by means of a file system copy: this can greatly decrease the boot time of VMs on systems where a fast shared file system exists.
</p>
</li>
<li>
<p>
VM image caching: this will greatly increase the boot performance for clouds with a base image that is launched often (a common use case). This works with any propagation method.
</p>
</li>
<li>
<p>
libvirt template support added. A cloud administrator can
now completely control the options sent to libvirt when starting
a virtual machine by editing a template file (details below).
</p>
</li>
<li>
<p>
ImportKeyPair is implemented in the EC2 protocols (details below).
</p>
</li>
</ul>
<i>2.8 - IaaS Services</i>
<ul>
<li>
<p>
Internally the Nimbus team has introduced a build and test system
with <a href="http://build.nimbusproject.org">Jenkins</a>.
This has helped us create a more robust and
well-tested system and made it easier for the community
to contribute.
</p>
</li>
<li>
<p>
The Nimbus Context Broker has been factored out into a separate tarball install
in addition to being part of the default IaaS installation.
</p>
</li>
<li>
<p>
Implemented the EC2 ImportKeyPair operation. The old CreateKeyPair behavior of accepting a "||" token based key import is now disabled by default.
</p>
<p>
EC2 originally offered only CreateKeyPair; the EC2 protocols in Nimbus IaaS now provide ImportKeyPair and CreateKeyPair as separate operations, as intended. The old behavior can be re-enabled by the administrator if desired (<a href="https://github.com/nimbusproject/nimbus/issues/36">enhancement 36</a>). A usage sketch with boto follows below.
</p>
</li>
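<li>
<p>
For illustration, a minimal sketch of calling ImportKeyPair against a Nimbus EC2 Query endpoint with boto 2.x. The host, port, credentials, key name, and file path below are placeholders, not values shipped with Nimbus.
</p>
<pre>
import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and query credentials (create real ones with nimbus-new-user).
region = RegionInfo(name="nimbus", endpoint="cloud.example.org")
conn = boto.connect_ec2(aws_access_key_id="QUERY_ID",
                        aws_secret_access_key="QUERY_SECRET",
                        is_secure=True, port=8444, region=region)

# Upload an existing public key under a chosen name; the old "||" token
# trick through CreateKeyPair is no longer accepted by default.
public_key = open("/home/user/.ssh/id_rsa.pub").read()
conn.import_key_pair("mykey", public_key)
</pre>
</li>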
<li>
<p>
Documentation added to the <a href="http://www.nimbusproject.org/docs/latest/admin/z2c/">zero to cloud guide</a> describing
how to setup Nimbus to work with KVM.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/59">ehcache unreliable for critical persistence</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/56">Termination should be retried if there is a fatal issue</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/49">accounting txt file not updated when corrupt VMs expire.</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/48">Add HTTPS unpropagation support.</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/46">Incorrect in_use in nimbus-nodes -l</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/44">User tools have buggy group-authz dir loading</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/37">Needed networks not being respected</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/63">New VM name was limited to a length which was too short in some circumstances</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/67">Object copy with JetS3t fails</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/52">typica now works with Nimbus Query interface</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/65">Safer handling of the gridmap file</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.8 - LANTorrent</i>
<ul>
<li>
<p>
In this release, LANTorrent was repackaged for more Pythonic
distribution mechanisms, and an important bug that caused
long-term stability problems was fixed.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/68">Bugs in the LANTorrent client object fixed. Transfers are more stable.</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/39">Lantorrent runaway after killing a newly created VM</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/57">LANTorrent can't be installed without an Internet connection</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/51">Add lantorrent tests to test suite.</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/32">Pilot updates</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.8 - Control Agents</i>
<ul>
<li>
<p>
libvirt template support added. A cloud administrator can
now completely control the options sent to libvirt when starting
a virtual machine by editing the template file
<tt class="literal">/opt/nimbus/etc/workspace-control/libvirt_template.xml</tt>
on the VMM nodes.
</p>
</li>
<li>
<p>
A cache of propagated images can now be kept on each VMM. Before
an image is propagated, the cache is checked for an image with a
matching checksum. If one is found, that image is used and no propagation
is needed, which can save a significant amount of time. This will
work with any propagation mechanism.
</li>
<li>
<p>
The cp (copy) propagation driver has been introduced in this release.
This prepares an image for use by a VMM by copying it directly out
of the Cumulus data store and into a temporary location from which it
will be booted. For users with shared fast file systems this can
bring great performance benefits.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/33">virtio support in generated libvirt xml</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/55">Workspace control verifies old unpropagation URL when saving with a new name</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.8 - Additional Notes</i>
<ul>
<li>
<p>
List of
<a href="http://github.com/nimbusproject/nimbus/compare/nimbus-release-2.7...nimbus-release-2.8RC1">all
commits</a>
between Nimbus 2.7 final and Nimbus 2.8 RC1.
</p>
</li>
</ul>
<a name="2.7"> </a>
<i>2.7 - Summary</i>
<ul>
<li>
<p>
Support for backfill and spot VM instances was introduced. Backfill instances are configured by the
administrator and automatically start on idle resources. When user requests are received, backfill
instances are preempted and terminated.
</p>
<p>
Spot instances are similar to backfill, but are initiated by the user. Users may "bid" on VM slots and
compete for available resources. Like backfill, spot instances can be preempted and terminated at any
time. Preemption occurs when either a real (non-spot) request or a spot request with a higher bid is
received and no other resources are available.
</p>
</li>
<li>
<p>
The EC2 Query interface has substantially improved compatibility with EC2 clients. The generated XML
is now largely identical.
</p>
</li>
<li>
<p>
Idempotent instance creation is now supported via the EC2 interfaces.
</p>
</li>
<li>
<p>
There are also numerous bug fixes and minor enhancements.
</p>
</li>
</ul>
<i>2.7 - IaaS Services</i>
<ul>
<li>
<p>
Support for backfill and spot VM instances was introduced.
See the <a href="admin/reference.html#backfill-and-spot-instances">documentation</a> for more information.
</p>
</li>
<li>
<p>
Idempotent instance creation is now supported via the EC2 interfaces: repeating a launch request with the same client token returns the original instance instead of starting a new one. A short boto sketch follows below.
</p>
</li>
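<li>
<p>
For illustration, a minimal boto 2.x sketch of an idempotent launch, assuming a boto version that supports the <tt class="literal">client_token</tt> parameter; the endpoint, credentials, image name, and token below are placeholders.
</p>
<pre>
import boto
from boto.ec2.regioninfo import RegionInfo

conn = boto.connect_ec2(aws_access_key_id="QUERY_ID",
                        aws_secret_access_key="QUERY_SECRET",
                        is_secure=True, port=8444,
                        region=RegionInfo(name="nimbus", endpoint="cloud.example.org"))

# Re-sending the same request with the same client_token should hand back
# the original reservation rather than booting a second VM.
token = "my-launch-001"
first = conn.run_instances("ami-mybaseimage", client_token=token)
again = conn.run_instances("ami-mybaseimage", client_token=token)
</pre>
</li>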
<li>
<p>
The metadata server now supports listening on multiple network interfaces. The URL provided to
VMs can be configured based on the requested networks.
</p>
</li>
<li>
<p>
The EC2 Query interface has substantially improved compatibility with EC2 clients. The generated XML
is now largely identical. The namespace version in responses matches the request version
regardless of the actual version. Additionally, version 1 signatures are now supported to enable
some clients that never updated to version 2.
</p>
</li>
<li>
<p>
The <tt class="literal">scripts/check-dependencies.sh</tt> program has been added to the services
installer to help detect dependency problems before starting the installation process.
</p>
</li>
<li>
<p>
Added support for specifying multiple CPU architectures in <tt class="literal">vmm.conf</tt>. Detailed in
<a href="https://github.com/nimbusproject/nimbus/issues/closed#issue/15">issue 15</a>.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed#issue/27">Fixed bug with user-selected kernels in workspace-control</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/issue/25">Fixed bug in nimbus-edit-user regarding group management</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed#issue/7">Fix unpropagation exception in lantorrent</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/9">Allow metadata server queries to use "/latest/"</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/10">nimbus-remove-user --help dumps stack</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/13">Installer broken with Python 2.7.1</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/17">Nimbus EC2 Query interface should support signature version 1</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/19">Query interface generated XML isn't identical (enough) to EC2</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/20">nimbus-reset-state not sourcing virtual env</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/21">EC2 query interface parameter lists break on POST requests</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/22">cache mechanism not surviving restarts</a>
</p>
</li>
<li>
<p>
<a href="https://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7097">nimbus-new-user doesn't accept relative paths</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.7 - LANTorrent</i>
<ul>
<li>
<p>
Improved logging output.
</p>
</li>
<li>
<p>
Added checksum checks to the data as it is streaming. Peers report back their values and errors are
thrown for all non-matching cases.
</p>
</li>
<li>
<p>
Altered the use of nonblocking sockets to make the process more efficient.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed#issue/24">LANTorrent transfers don't correctly check the result of send()</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.7 - Control Agents</i>
<ul>
<li>
<p>
Added support for propagation from an HTTPS server with X509 proxy authentication. See
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/12">issue 12</a> for
details.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/18">mount-alter.sh terminating without umount</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed/#issue/14">Add Locking to workspace pilot</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/closed#issue/23">Gzipped images cannot be restarted</a>
</p>
</li>
<li>
<p>
<a href="https://github.com/nimbusproject/nimbus/issues/issue/27">User-selected kernels broken in workspace-control</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.7 - Additional Notes</i>
<ul>
<li>
<p>
List of
<a href="http://github.com/nimbusproject/nimbus/compare/nimbus-release-2.6...nimbus-release-2.7">all
commits</a>
between Nimbus 2.6 final and Nimbus 2.7 final.
</p>
</li>
</ul>
<a name="2.6"> </a>
<br><hr><br>
<i>2.6 - Summary</i>
<ul>
<li>
<p>
This is the first release of LANTorrent, a fast multicast file distribution protocol designed to saturate all the links in a switch. This works best for situations with a local area network, large files, and many cooperative peers that need the same file -- i.e., it is geared towards IaaS image propagation (but could work in other scenarios).
</p>
</li>
<li>
<p>
Dynamic VMM configuration management is now possible with the new <tt class="literal">nimbus-nodes</tt> program. This allows you to adjust the resource pool while the service is still running, adding and removing resources on the fly.
</p>
</li>
<li>
<p>
The context broker has a new client-side HTTP/REST interface in addition to WSRF. Users authenticate with the same tokens they use for Cumulus and the Elastic Query API. This opens up the context broker for several new client integrations including ones using alternate languages.
</p>
</li>
<li>
<p>
Cumulus now supports the S3 COPY operation.
</p>
</li>
<li>
<p>
A new upgrade tool, <tt class="literal">install-from</tt>, is introduced. It assists with upgrading a previous Nimbus installation (2.5 and higher). It currently requires that the old Nimbus services are all stopped and no VMs are deployed.
</p>
</li>
<li>
<p>
<tt class="literal">nimbus-import-users</tt> is a new program that allows multiple cloud installations to coordinate user information with each other.
</p>
</li>
<li>
<p>
<tt class="literal">nimbus-public-image</tt> is a new program that allows administrators to register VM images in the local Cumulus repository that will be usable for all users of the system (but of course only stored once on disk).
</p>
</li>
<li>
<p>
As usual, some bug fixes and minor enhancements.
</p>
</li>
</ul>
<i>2.6 - Installer</i>
<ul>
<li>
<p>
<i>install-from program</i>
</p>
<p>
A new upgrade tool is introduced; it assists with upgrading a previous Nimbus installation (2.5 and higher). It currently requires that the old Nimbus services are all stopped and no VMs are deployed.
</p>
<p>
You use this program instead of the regular <tt class="literal">install</tt> instructions. Most of the installation process is exactly as normal, but <tt class="literal">install-from</tt> will base most of the initial configurations on your old installation.
</p>
<p>
It is not yet entirely Magic&trade; so you will need to follow the instructions in the <a href="admin/upgrading.html">upgrade guide</a> to make it happen.
</p>
</li>
</ul>
<i>2.6 - IaaS Services</i>
<ul>
<li>
<p>
<i>nimbus-nodes program</i>
</p>
<p>
Dynamic VMM configuration management is now possible with the new <tt class="literal">nimbus-nodes</tt> program. This allows you to adjust the resource pool while the service is still running, adding and removing resources on the fly.
</p>
<p>
For information and instructions, see <a href="admin/reference.html#resource-pool">this section</a> of the administrator guide.
</p>
</li>
<li>
<p>
<i>nimbus-import-users</i>
</p>
<p>
This is a new program that allows multiple cloud installations to coordinate user information with each other.
</p>
<p>
You can "dump" user information into a text file and this allows you to import it elsewhere. This allows administrators (or larger groups) to coordinate users across clusters/installations.
</p>
<p>
It is compatible with the <tt class="literal">nimbus-list-users</tt> tool; you can, for example, run things like <tt class="literal">ssh nimbus@othercloud nimbus-list-users % | nimbus-import-users</tt>
</p>
</li>
<li>
<p>
<i>nimbus-public-image</i>
</p>
<p>
This is a new program that allows administrators to register VM images in the local Cumulus repository that will be usable for all users of the system.
</p>
<p>
These are read-only images that all Nimbus users will see as an option in their "--list" output for running.
</p>
<p>
To save these images as new, derivative templates, the user would need to run "--save --new-name" and create a new object.
</p>
</li>
<li>
<p>
<i>Passthrough configuration</i>
</p>
<p>
Previously, when an image file URL scheme was not the normal "cumulus://", the service would always pass it along to the VMM to interpret on its own.
</p>
<p>
Now you can explicitly set what propagation to use for "cumulus://" and what passthrough schemes are allowed (if any).
</p>
<p>
Enhancement <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7092">7092</a>
shows where to find the configuration.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7084">Bug 7084 - Explicit MAC->IP mappings are ignored after service restart</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7088">Bug 7088 - newname/alternative unpropagated target fails</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7096">Bug 7096 - User mgmt tools group number validation is excessive</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.6 - Cumulus</i>
<ul>
<li>
<p>
<i>COPY</i>
</p>
<p>
The <a href="http://docs.amazonwebservices.com/AmazonS3/2006-03-01/index.html?UsingCopyingObjects.html">COPY</a>
operation allows you to duplicate an object stored in the system without having to do any data transfer to your client machine.
</p>
<p>
It was tested with <tt class="literal">s3cmd</tt> and <tt class="literal">boto</tt>; future cloud-client versions could use this more natively for image duplication/rename. A boto sketch follows below.
</p>
</li>
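<li>
<p>
For illustration, a minimal boto 2.x sketch of a server-side copy against Cumulus. The host, port, credentials, bucket, and key names below are placeholders and depend on your installation.
</p>
<pre>
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection("CUMULUS_ID", "CUMULUS_SECRET",
                    host="cumulus.example.org", port=8888,
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())

bucket = conn.get_bucket("Repo")
# Server-side copy: no image data travels through the client machine.
bucket.copy_key("VMS/me/debian-copy.img", "Repo", "VMS/me/debian.img")
</pre>
</li>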
<li>
<p>
<i>Redirection</i>
</p>
<p>
In order to make Cumulus a scalable service, we added a feature
which takes advantage of the temporary redirect error in the
<a href="http://docs.amazonwebservices.com/AmazonS3/latest/index.html">Amazon S3 Protocol</a>. A Cumulus administrator can create a text file
full of cloned Cumulus server contact strings. A maximum number
of allowed connected clients is associated with each replicated
Cumulus server. If, after a client connects and authenticates,
that number is exceeded, then a <i>301 Temporary Redirect</i>
error is returned to the client instructing it to try a different
Cumulus server.
</p>
</li>
<li>
<p>
<i>Postgres DB for Authentication</i>
</p>
<p>
Minor changes were made to the service to allow an admin to
configure Cumulus so that it will use postgres instead of the
default SQLite DB.
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7083">Bug 7083 - s3cmd setacl public fails</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.6 - LANTorrent</i>
<ul>
<li>
<p>
<i>Introducing LANTorrent!</i>
</p>
<p>
This is the first release of LANTorrent, a fast multicast file distribution protocol designed to saturate all the links in a switch.
</p>
<p>
This works best for situations with a local area network, large files, and many cooperative peers that need the same file -- i.e., it is geared towards IaaS image propagation (but could work in other scenarios).
</p>
<p>
It is disabled by default and takes extra steps to activate. See
<a href="admin/reference.html#lantorrent">this section</a> of
the administrator's guide for instructions as well as
<a href="admin/reference.html#lantorrent-protocol">detailed explanations</a> of how it works.
</p>
</li>
</ul>
<i>2.6 - Context Broker</i>
<ul>
<li>
<p>
<i>HTTP/REST support</i>
</p>
<p>
The context broker has a new client-side HTTP/REST interface in addition to WSRF. Users authenticate with the same tokens they use for Cumulus and the Elastic Query API. This opens up the context broker for several new client integrations including ones using alternate languages.
</p>
<p>
For example there is a
<a href="http://github.com/nimbusproject/Nimboss">prototype
client api</a> being built that integrates with the REST broker.
</p>
</li>
</ul>
<i>2.6 - Control Agents</i>
<ul>
<li>
Support for LANTorrent (see above). The relevant configurations are
in the "propagation.conf" file.
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7101">Bug 7101 - dhcp-config.sh is unreliable on shared filesystem</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.6 - Cloud Client</i>
<ul>
<li>
<p>
<i>Use cloud-client 16 or higher</i>
</p>
<p>
As of this release, the latest cloud client is
<a href="http://github.com/nimbusproject/nimbus/raw/master/cloud-client/nimbus-cloud-client-src/CHANGES.txt">number
17</a>. Cloud client 16 will also work, but 15 or before will not
due to the introduction of Cumulus in Nimbus 2.5.
</p>
</li>
</ul>
<i>2.6 - Additional Notes</i>
<ul>
<li>
<p>
List of
<a href="http://github.com/nimbusproject/nimbus/compare/nimbus-release-2.5...nimbus-release-2.6">all
commits</a>
between Nimbus 2.5 final and 2.6 final.
</p>
</li>
<li>
<p>
Problems addressed in RC2 vs. RC1
(<a href="http://github.com/nimbusproject/nimbus/compare/nimbus-release-2.6-RC1...nimbus-release-2.6-RC2">All commits</a>)
</p>
<ul>
<li>
<p>
Issue with image md5sums not getting recorded in certain situations.
</p>
</li>
<li>
<p>
Ran into situations where two seconds was not enough time to get a database connection from a pool of connections. Decided to make this infinite (for correctness purposes, since the timeouts were not handled correctly).
</p>
</li>
<li>
<p>
The "bad CPU architecture" remote error message was incorrectly stated.
</p>
</li>
<li>
<p>
The build and test suite runs on more platforms now by using the /tmp directory for special files.
</p>
</li>
<li>
<p>
Service support for the new admin client "nimbus-nodes" did not handle a missing configuration gracefully.
</p>
</li>
<li>
<p>
Cumulus includes a new scalability test.
</p>
</li>
<li>
<p>
The installer did not handle the lack of the "uuidgen"
command well; it now uses a library instead of relying on that
commandline tool being present.
</p>
</li>
<li>
<p>
The service was giving a bad error message for image URL schemes (besides 'file' or 'cumulus') that were not explicitly authorized.
</p>
</li>
</ul>
</li>
</ul>
<a name="2.5"> </a>
<br><hr><br>
<i>2.5 - Summary</i>
<ul>
<li>
<p>
<i>We are happy to announce the first version of Cumulus!</i>
</p>
<p>
Cumulus is a storage cloud implementation compatible with the Amazon Web Services S3 REST API (with small exceptions); it works seamlessly with S3 clients such as s3cmd, jets3t, and boto. In addition, it offers extra functionality such as disk quota enforcement.
</p>
<p>
Cumulus replaces the previous GridFTP-based upload and download of VM images: it integrates tightly into a Nimbus installation as the VM image repository solution. It can also be installed on its own to manage a storage cloud: a Nimbus IaaS installation needs Cumulus, but not vice versa.
</p>
<p>
Read more below in the Cumulus section of the changelog.
</p>
</li>
<li>
<p>
<i>Zero To Cloud installation process</i>
</p>
<p>
Besides the first Cumulus release, the other major event of 2.5 is the first release of the "Zero To Cloud" installation process.
</p>
<p>
Many of the new programs and enhancements in this release work together with the goal of providing a seamless installation process. Read more in the changelog below about each new feature; highlights include:
</p>
<ul>
<li>
<p>
A new user management system: users can be added and managed quickly. Their credentials are created on the fly (but there is still an alternative path for coexistence with your own credential system).
</p>
</li>
<li>
<p>
All of the new user management tools include a machine parsable mode that makes them easy to incorporate into your own scripts.
</p>
</li>
<li>
<p>
Tight user integration with Cumulus: user management includes user setup with the image repository as well as the IaaS services.
</p>
</li>
<li>
<p>
Tight integration between the user management tools and the web application. This allows administrators to add a user and instantly receive the secure URL that the new user will visit to pick up his credentials and cloud.properties file.
</p>
</li>
<li>
<p>
No unix account separation or root account needed for Cumulus -- installation is easier because the IaaS central services and Cumulus can live in the same, non-root unix account.
</p>
</li>
<li>
<p>
The IaaS services are able to consult information in a database about files the remote user wants to launch. When the "outside" namespace of the file is Cumulus based, there is a translation to an "inside" location and mechanism. This technique is now used to encapsulate propagation mechanisms, allowing for an easy way to introduce new and faster methods.
</p>
</li>
</ul>
<p>
All of this combined to allow us to, among other things, provide a new installation process for Nimbus. You can now tend to the central service node installation separately and very quickly. Further, you can install in a "fake" mode that doesn't actually invoke hypervisor machines yet. Doing all of this allows you to know that the central services are configured and working (including security), leaving VMM related matters to be tackled separately.
</p>
</li>
<li>
<p>
<i>Integration with a central, site DHCPd</i>
</p>
<p>
Instead of hosting a private DHCPd for each VMM, Nimbus now supports (by default) editing a lease file for a central DHCPd on the LAN. This makes it easier to install and integrate with existing infrastructure. The old method is supported as an advanced configuration. Details below in the IaaS Services and workspace-control sections.
</p>
</li>
<li>
<p>
<i>New scheduling options</i>
</p>
<p>
When using the default scheduler, the VMM selection process is now more configurable:
</p>
<p>
There is a <i>round-robin</i> configuration that looks for available nodes with the highest percentage of free RAM, and a <i>greedy</i> configuration that looks for available nodes with the lowest percentage of free RAM. Details below.
</p>
</li>
<li>
<p>
<i>New options for pilot based scheduling</i>
</p>
<p>
The pilot module now sends more information to the local resource manager (such as Torque). This is for accounting and potentially authorization purposes: you can now more easily track remote users and more information about each launch via the resource manager itself.
</p>
</li>
<li>
<p>
<i>New alternative propagation methods</i>
</p>
<p>
There are two new contributed propagation drivers for getting VM images to the hypervisor nodes: HTTP and HDFS. These are for use outside of the cloud client currently, and you need to configure Nimbus to allow for non-Cumulus-based image locations.
</p>
<p>
These are for use on sites where the remote user knows more about image locations than the Cumulus name for it. The HTTP driver is also good for having a central image repository (you can specify trusted hosts to pull from).
</p>
</li>
<li>
<p>
<i>Better performing blankspace: physical partition leases</i>
</p>
<p>
In order to support very large temporary space partitions for VMs, and fast access to them, Nimbus now allows you to configure a list of physical partitions to lease out to incoming Nimbus VMs on each node (formatted for each use).
</p>
</li>
<li>
<p>
<i>Support for user-requested number of cores</i>
</p>
<p>
Nimbus now allows remote users to specify the number of cores the VM(s) should have. The authorization policies now support this as well, allowing administrators to specify the maximum allowed for each user/group.
</p>
</li>
<li>
<p>
<i>New service node dependency requirements</i>
</p>
<p>
See the Cumulus section for specifics.
</p>
</li>
<li>
<p>
<i>Bug fixes and smaller enhancements</i>
</p>
<p>
See each section below for specifics.
</p>
</li>
<li>
<p>
<i>For developers</i>
</p>
<p>
Highlights for software developers include a new internal testing
framework for the RM API, tight integration with IntelliJ IDEA if
you use that IDE, a propagate-only mode (no VMM required), a way to
quickly overwrite just jar files in an installation, and better
tarball/release management.
</p>
</li>
</ul>
<i>2.5 - Installer</i>
<ul>
<li>
<p>
<i>Zero To Cloud Guide</i>
</p>
<p>
You can now follow <a href="admin/z2c/index.html">one installation document</a> step by step and achieve a setup that works for the cloud-client program.
</p>
</li>
<li>
<p>
Now installs and configures Cumulus.
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7018">Enhancement 7018 - Installation produces a log file</a>
</p>
</li>
<li>
<p>
Now works with relative and absolute paths and catches more
error/warning situations.
</p>
</li>
<li>
<p>
The autoconfigure process is now an option of nimbus-configure.
</p>
<p>
See the installation guide for details. The autoconfigure process
is also now more in line with the installation guide: it does less
so that it fits better in the guide's step by step instructions.
</p>
</li>
<li>
<p>
<i>For Developers</i>
</p>
<ul>
<li>
<p>
The build system now forces the Java pieces to be JRE 1.5
compatible.
</p>
</li>
<li>
<p>
<i>scripts/check-jars.sh</i> makes it easy to check JRE 1.5
compatibility on any directory of jars (including
subdirectories). For example, a Nimbus installation.
</p>
</li>
<li>
<p>
<i>scripts/jars-build-and-install.sh</i>
</p>
<p>
Lets you change Java source code and just build and install
the jars and get the changes straight into an existing
installation for immediate experimentation/testing.
</p>
</li>
<li>
<p>
<i>scripts/make-dist-remote.sh</i> makes it easier to generate release and nightly tarballs and get them online with just one command.
</p>
</li>
</ul>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7020">Bug 7020 - installer has scary warnings that don't have an effect</a> (boo)
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7030">Bug 7030 - nimbusctl should wait longer to check process</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7031">Bug 7031 - installer has erroneous warnings about wsdd files not being found</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7041">Bug 7041 - ./install should warn if user is root</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.5 - IaaS Services</i>
<ul>
<li>
<p>
<i>A new user management system</i>
</p>
<p>
The tight integration with Cumulus and a new authorization database have resulted in a way to create a cohesive "user system" for Nimbus clouds that allows the administrator to quickly add, delete, and edit users. In $NIMBUS_HOME/bin, you will find several new programs, including:
</p>
<ul>
<li>nimbus-new-user</li>
<li>nimbus-edit-user</li>
<li>nimbus-list-users</li>
<li>nimbus-remove-user</li>
</ul>
<p>
Each of these commands works well from the terminal but also includes options that make it easy to do a wide array of tasks from scripts.
</p>
<p>
In the case of the EC2 Query protocol, the credentials are no longer checked against a text file full of keys: they now integrate with these tools and it is all database driven.
</p>
<p>
In order to continue supporting X509 certificates, a map file is still populated with certificate names ("DN"s), but these new commandline utilities should be what edits those files in order to keep them in sync. You should use these tools for all user management now.
</p>
</li>
<li>
<p>
<i>nimbus-new-user program</i>
</p>
<p>
The nimbus-new-user program is a particular highlight; it replaces the "cloud-admin.sh" program.
</p>
<p>
Right out of the box you can use this to generate new users. It will produce an X509 certificate/key, a query ID/key (for use with Cumulus and EC2 Query interfaces), and the cloud.properties file that the user should use. It allows you to specify the set of authorization/credits rules to apply. It also integrates with the Nimbus web application if you choose: it will produce a special single-use URL that you can email to new users to pick up the newly created credentials and the user-specific <i>cloud.properties</i> file.
</p>
<p>
There is sample help output and sample usages output in the comments of <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7035">Enhancement 7035 - new create-user process</a>
</p>
</li>
<li>
<p>
<i>nimbus-reset-state program</i>
</p>
<p>
The <i>nimbus-reset-state</i> program allows administrators to reset
long term accounting data, running state data that tracks the
cluster, and Cumulus users and files in bulk.
</p>
<p>
It is a destructive program; please read the help output carefully.
For safety, it includes an "are you sure?" prompt if you don't pass
it a "force" argument.
</p>
<p>
Addresses <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7021">Bug 7021</a>.
</p>
</li>
<li>
<p>
<i>nimbus-version program</i>
</p>
<p>
The <i>nimbus-version</i> program allows you to examine/remember
the details of your installation. A metadata file is inserted by
the release builder which records things like the exact version,
build date, and git commit identifier that the release was built
from.
</p>
</li>
<li>
<p>
<i>Web application enhancement</i>
</p>
<p>
The web application contains an update to allow it to integrate with the nimbus-new-user program as well as distribute the personalized cloud.properties file that nimbus-new-user can produce.
</p>
<p>
Once you configure the web application (it is not enabled by default), nimbus-new-user can automatically set up a pickup URL for new users. This is a short-lived, obscure URL that you can share with new users so they can pick up their new credentials (this can be for either X509 or query tokens or both).
</p>
<p>
Because the nimbus-new-user tool can be put into a machine parsable mode, the URL can be incorporated programmatically into a script you write to send a welcome email.
</p>
</li>
<li>
<p>
<i>Support for user-requested number of cores.</i>
</p>
<p>
This can be limited by the administrator using the group authorization system.
</p>
<p>
See <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6999">Enhancement 6999 - Integrate multi-core support and add authorization handling for it</a>
</p>
<p>
And the cloud.properties file you distribute to cloud users can now contain the suggested number of cores as of cloud client #16 (<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7062">Enhancement 7062</a>).
</p>
<p>
<b>Contributed by Patrick Armstrong, University of Victoria</b>
</p>
</li>
<li>
<p>
<i>A new propagation namespace system</i>
</p>
<p>
The service will treat image URIs as an "external" namespace that are authorized via the new authorization database. They are translated into an internal representation that allows propagation to actually occur (and means that propagation mechanisms can now be entirely pluggable).
</p>
<p>
Some propagation drivers may bypass this namespace if you allow, such as the new HTTP and HDFS mechanisms (see the workspace-control section below).
</p>
</li>
<li>
<p>
<i>New scheduling options</i>
</p>
<p>
When using the default scheduler, the VMM selection can now happen in one of two ways, driven by configuration:
</p>
<p>
1. A <i>round-robin</i> configuration in resource-locator-ACTIVE.xml (this is the default mode). This looks for matching nodes (enough space to run, appropriate network support, etc.) with the highest percentage of free RAM. If there are many equally free nodes it will pick randomly from those. As should be clear, this favors entirely empty nodes first.
</p>
<p>
2. A <i>greedy</i> configuration in resource-locator-ACTIVE.xml. This looks for matching nodes (enough space to run, appropriate network support, etc.) with the lowest percentage of free RAM. If there are many equally unfree nodes it will pick randomly from those.
</p>
<p>
See: <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7012">Enhancement 7012 - Better round-robin scheduling of multiple VMs per node</a>
</p>
</li>
<li>
<p>
<i>New options for pilot based scheduling</i>
</p>
<p>
The pilot module now sends more information to the local resource manager (such as Torque). This is for accounting and potentially authorization purposes: you can now more easily track remote users and more information about each launch via the resource manager itself.
</p>
<p>
Extra launch information: the pilot module sends the memory request for the VMs, administrators can use this information for accounting purposes.
</p>
<p>
Extra account information: the pilot module adds "-A /remote/user/dn" to the submission, which allows administrators to use this information for accounting purposes. The "-A" flag is part of the PBS standard and it is up to the implementation/configuration to ignore the information or do something with it, such as accounting.
</p>
<p>
<b>Contributed by Patrick Armstrong, University of Victoria</b>
</p>
</li>
<li>
<p>
<i>Integration with site DHCPd</i>
</p>
<p>
Instead of hosting a private DHCPd for each VMM, Nimbus now supports (by default) editing a lease file for a central DHCPd on the LAN. This makes it easier for many administrators to integrate with existing infrastructure (and in most cases speeds up the installation).
</p>
<p>
You will configure the DHCP server to respond to specific MAC addresses with specific IP addresses. Each time you change your network pool in Nimbus, you must update the DHCPd at the same time with the new information.
</p>
<p>
The old method of having a DHCPd run on each VMM is still supported as an advanced configuration.
</p>
<p>
See: <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7066">Enhancement 7066 - Allow VMs to use site DHCP server</a>
</p>
</li>
<li>
<p>
The EC2 interfaces now return IP address as well as hostname. This was a feature added to the EC2 protocol
API version 2009-07-15. This addresses
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7073">Enhancement 7073</a>.
</li>
<li>
<p>
<i>For developers</i>
</p>
<p>
There is a new internal testing framework for the RM API and there is also now tight integration with IntelliJ IDEA if you have access to that.
</p>
<p>
See: <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7045">Enhancement 7045 - Make Nimbus services installation portable</a>
</p>
<p>
And: <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7046">Enhancement 7046 - Add test suite infrastructure</a>
</p>
<p>
The testing framework also includes enhancements to use Spring <i>@DirtiesContext</i> <b>contributed by Paulo Motta, Google Summer of Code</b>.
</p>
</li>
<li>
<p>
<i>Better event logging</i>
</p>
<p>
The service now logs more information to the logs and to the "accounting-events.txt" file.
</p>
<p>
See: <a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7043">Enhancement 7043 - add more information to event log files (might break anything parsing them)</a>
</p>
<p>
<b>Collaboration with Patrick Armstrong, University of Victoria</b>
</p>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6998">Bug 6998 - VM request fails when there is no customization task with reference client</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7015">Bug 7015 - Confusing error message: No resource pool has an applicable entry</a>
<br><b>Contributed by Paulo Motta, Google Summer of Code</b>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7016">Bug 7016 - Error when there is no DNS setting</a>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7019">Bug 7019 - jars missing from lib/services</a>
<br><b>Contributed by Paulo Motta, Google Summer of Code</b>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7067">Bug 7067 - MAC addresses regenerated at every service start</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.5 - Cumulus</i>
<ul>
<li>
<p>
<i>We are happy to announce the first version of Cumulus!</i>
</p>
<p>
Cumulus is a storage cloud implementation compatible with the Amazon Web Services S3 REST API (with small exceptions); it works seamlessly with S3 clients such as s3cmd, jets3t, and boto. In addition, it offers extra functionality such as disk quota enforcement.
</p>
<p>
Cumulus replaces the previous GridFTP-based upload and download of VM images: it integrates tightly into a Nimbus installation as the VM image repository solution. It can also be installed on its own to manage a storage cloud: a Nimbus IaaS installation needs Cumulus, but not vice versa. A short boto sketch follows below.
</p>
</li>
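<li>
<p>
For illustration, a minimal boto 2.x sketch of talking to Cumulus through the S3 protocol. The host, port, credentials, bucket, and file names below are placeholders and depend on your installation.
</p>
<pre>
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Placeholder Cumulus contact point and credentials (created with the
# Nimbus user management tools).
conn = S3Connection("CUMULUS_ID", "CUMULUS_SECRET",
                    host="cumulus.example.org", port=8888,
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())

bucket = conn.create_bucket("my-data")
key = bucket.new_key("images/debian.img")
# Uploads count against the per-user disk quota, if one is configured.
key.set_contents_from_filename("/tmp/debian.img")
</pre>
</li>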
<li>
<p>
<i>Cumulus requirements</i>
</p>
<p>
Cumulus requires Python 2.5+ (but not 3.x) with SQLite support [1] and gcc [2] on the central service node.
</p>
<p>
[1] - SQLite should be included by default in any Python 2.5 installation but we've run into some distributions removing that.
</p>
<p>
[2] - gcc is not strictly required if you install packages to your Python site-packages: pyOpenSSL and Twisted 8.2+. Otherwise the installer will do some Python->C bridge code compilation to get dependencies enabled. The service installer does not run as root (and nothing on the service node needs root anymore), so in the case where you use gcc like this, those dependencies will be installed inside a virtual Python environment just for Cumulus code.
</p>
</li>
<li>
<p>
<i>Support for quotas</i>
</p>
<p>
Cumulus allows administrators to set disk space limits on a per user basis. By default users are created with unlimited space. See the <a href="admin/reference.html#cumulus-quotas">quota documentation</a> for more information.
</p>
</li>
<li>
<p>
<i>GridFTP is not used with this release</i>
</p>
<p>
Cumulus is used instead. The impact this will have on current cloud users is discussed in the <a href="http://lists.globus.org/pipermail/workspace-announce/2010-June/000020.html">email introducing Cumulus</a>.
</p>
</li>
<li>
<p>
Cumulus will fall under the same best-effort support policy as other Nimbus components. As always, using the mailing lists is advised since you will get an opportunity for help from the community (this does happen and it is great to see).
</p>
</li>
<li>
<p>
<i>Learn more</i>
</p>
<p>
To learn about Cumulus in depth, see the Cumulus <a href="faq.html#cumulus">FAQ entries</a> and reference <a href="admin/reference.html#cumulus">documentation</a>.
</p>
</li>
</ul>
<i>2.5 - Context Broker</i>
<ul>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7042">Bug 7042 - Context broker needs to create certificates with different serial number</a>
<br><b>Contributed by Pierre Riteau, University of Rennes 1</b>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7061">Bug 7061 - Context broker fails to handle a retrieve situation</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.5 - Control Agents</i>
<ul>
<li>
<p>
<i>DHCPd changes</i>
</p>
<p>
By default, workspace-control is now configured to use an off-node DHCPd server. That server should be configured according to the instructions in the <a href="admin/z2c/index.html">Zero To Cloud Guide</a>.
</p>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7066">Enhancement 7066 - Allow VMs to use site DHCP server</a>
</p>
<p>
In order to enable the old method of having a DHCPd run on each VMM, you need to carry out the advanced "localdhcp" configurations described in workspace-control's "etc/workspace-control/networks.conf" file.
</p>
</li>
<li>
<p>
<i>New alternative propagation methods</i>
</p>
<p>
The contributed HTTP and HDFS propagation drivers currently integrate without the propagation namespace awareness: normally in the cloud configuration, remote users will specify a "cumulus://" URL which is translated to an internal propagation location and mechanism.
</p>
<p>
They are for use on sites where the remote user knows more about image locations than the Cumulus name for it.
</p>
<p>
These alternative propagation methods need to be manually enabled; consult workspace-control's "etc/workspace-control/propagation.conf" file.
</p>
</li>
<li>
<p>
<i>HTTP propagation driver</i>
</p>
<p>
The HTTP driver is good for having a central image repository on the internet for image files; using this, you will not need to get changes to images back to each cloud's repository.
</p>
<p>
Enable it via workspace-control's "etc/workspace-control/propagation.conf" file. You can also specify trusted hosts to pull from; see the central Nimbus node's "services/etc/nimbus/workspace-service/global-policies.conf" file.
</p>
<p>
<b>Contributed by Patrick Armstrong, University of Victoria</b>
</p>
</li>
<li>
<p>
<i>HDFS propagation driver</i>
</p>
<p>
The HDFS propagation driver is good for sites experimenting with fast propagation techniques. It requires a local HDFS installation as well as an installation of the client tools on each VMM. See workspace-control's "etc/workspace-control/propagation.conf" file.
</p>
<p>
<b>Contributed by Matt Vliet, Google Summer of Code, University of Victoria</b>
</p>
</li>
<li>
<p>
<i>Better performing blankspace: physical partition leases</i>
</p>
<p>
In order to support very large temporary space partitions for VMs, and fast access to them, Nimbus now allows you to configure a list of physical partitions to lease out to incoming Nimbus VMs on each node.
</p>
<p>
This is a non-default and preliminary feature.
</p>
<p>
The partition will be presented to a configured "/dev" device inside every Nimbus VM that is started on the VMM node. If there is a conflict with a mountpoint that the user's request contains, the administrator has a choice of rejecting the request or not providing the blankspace.
</p>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7065">Enhancement 7065 - better performing blankspace: physical partition leases</a>
</p>
</li>
<li>
<p>
<i>For Developers</i>
</p>
<ul>
<li>
<p><i>Propagate-only mode</i></p>
<p>
New propagation-only mode for workspace-control that helps
with development and testing of fast/smart propagation
techniques. Does not require libvirt or any sudo
privileges to work with the central Nimbus service and
fully exercise propagation.
</p>
<p>
See the embedded notes in the new 'src/propagate-only.sh'
script in workspace-control.
</p>
</li>
<li>
<p>
A workspace-control helper script <i>bin/fakesudo</i> makes
it easier to work in situations where workspace-control is
running entirely as root (not a normal situation).
</p>
</li>
</ul>
</li>
<li>
<p>
<i>Bug Fixes</i>
</p>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7051">Bug 7051 - tap:aio mode xml is wrong for libvirt</a>
<br><b>Contributed by Pierre Riteau, University of Rennes 1</b>
</p>
</li>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7060">Bug 7060 - mount+alter needs locking</a>
</p>
</li>
</ul>
</li>
</ul>
<i>2.5 - Cloud Client</i>
<ul>
<li>
<p>
<i>Nimbus 2.5 requires the newest cloud client</i>
</p>
<p>
The current cloud client as of this release is cloud-client-016. The 2.5 service release will <b>not</b> work with previous cloud clients.
</p>
</li>
<li>
<p>
<i>Backwards compatible</i>
</p>
<p>
Cloud client 16 introduces Cumulus support using the S3 library jets3t. In order to maintain backwards compatibility, it is still compatible with older clouds that use GridFTP.
</p>
<p>
The new, default repository behavior is triggered when "vws.repository.type=cumulus" is present in the cloud.properties file. This is a value that any Nimbus 2.5+ cloud should distribute in their cloud.properties file (start with the <a href="admin/z2c/index.html">Zero To Cloud Guide</a> to learn how to give out the right cloud.properties file).
</p>
</li>
<li>
<p>
Added support for using unencrypted keys directly instead of
needing to run proxy-init. See the README for details.
</p>
<p>
The properties 'nimbus.cert' and 'nimbus.key' are consulted
first, then a normal grid proxy search is made, then ~/.nimbus is
consulted, then ~/.globus.
</p>
<p>
This means one less user step in many of the common ways people use
the cloud client. The new (optional) 'nimbus.cert' and
'nimbus.key' properties also make toggling between clouds easier.
</p>
</li>
<li>
<p>
Other enhancements can be viewed in the <a href="http://github.com/nimbusproject/nimbus/raw/master/cloud-client/nimbus-cloud-client-src/CHANGES.txt">cloud client changelog</a>.
</p>
</li>
</ul>
<i>2.5 - Additional Notes</i>
<ul>
<li>
<p>
<a href="http://bugzilla.mcs.anl.gov/globus/showdependencytree.cgi?id=7013&hide_resolved=0">Nimbus 2.5 release bug and enhancement tracker</a>
</p>
</li>
<li>
<p>
List of <a href="http://github.com/nimbusproject/nimbus/compare/nimbus-release-2.4...nimbus-release-2.5">all commits</a> between the <a href="http://github.com/nimbusproject/nimbus/tree/nimbus-release-2.4">nimbus-release-2.4</a> tag and the <a href="http://github.com/nimbusproject/nimbus/tree/nimbus-release-2.5">nimbus-release-2.5</a> tag in the Nimbus code repository.
</p>
</li>
<li>
<p>
For developers and community testers: <a href="changelog_supplement2.5.html">list of fixes to 2.5 release candidates</a> that are included in 2.5 final.
</p>
</li>
</ul>
<a name="2.4"> </a>
<br><hr><br>
<i>2.4 - Services</i>
<ul>
<li>
<p>
We are happy to announce the first version of the new Nimbus installer.
</p>
<p>
It helps make useful initial configurations, eliminates the
need for a separate Globus container installation, and sets up an
embedded certificate authority.
</p>
<p>
The installer also introduces the important new concept of "NIMBUS_HOME" on the services node.
</p>
<p>
Please read on
<a href="http://wiki.github.com/nimbusproject/nimbus/nimbus-24-installer">here</a>
for an overview of the changes and also the directions that we will
be taking the installer.
</p>
</li>
<li>
<p>
The build time was significantly cut; see
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6990">this
enhancement bug</a> for more information.
</p>
</li>
<li>
<p>
GridFTP will now work with the Nimbus auto-certificate-authority
technology because a revocation file is now added when the CA is created. See
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6552">this bug</a> for more information.
</p>
</li>
<li>
<p>
AutoContainer officially removed.
</p>
</li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7006">
Enhancement 7006</a> - Updating servlet API to 2.5
</li>
<li>
<p>Assorted bugfixes:</p>
<ul>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6978">
Bug 6978 - Web app only listens on localhost
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6996">
Bug 6996 - autoconfig.sh gives a bad suggestion.
</a></li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6833">
Bug 6833 - ClassCastException in ResourceSweeper</a>
</li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7007">
Bug 7007 - destruction/schedule issues</a>
</li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7009">
Bug 7009 - All scripts should use shebang /bin/bash</a>
</li>
<li>
<a href="https://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6992">
Bug 6992 - Nimbus EC2 Query API limits HTTP headers to 4k</a>
</li>
<li>
<a href="https://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6987">
Bug 6987 - Context broker fails on cluster document without &lt;requires&gt; element</a>
</li>
</ul>
</li>
</ul>
<i>2.4 - Control Agents</i>
<ul>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7008">
Enhancement 7008</a> - Allow workspace-control to start more types of images
</li>
<li>
<p>Assorted bugfixes:</p>
<ul>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6968">
Bug 6968 - mount-alter.sh breaks when sent a partition (rather than disk image) task
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6962">
Bug 6962 - ebtables locking
</a></li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7005">
Bug 7005 - multiple authorized kernels issue</a>
</li>
<li>
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=7009">
Bug 7009 - All scripts should use shebang /bin/bash</a>
</li>
</ul>
</li>
</ul>
<i>2.4 - Monitoring</i>
<ul>
<li>
<p>
The Nimbus Monitoring &amp; Discovery system has received a substantial overhaul since its first incarnation. The system still utilizes custom Nagios plugins to report worker node and head node resources. However, the Globus 4.x MDS utility is no longer used as the data registry. Instead, the system publishes XML to files to be served up by a web browser. A new utility, <a href="http://github.com/hep-gc/cloud-aggregator">Cloud Aggregator</a> has been developed separately to query these XML sources.
</p>
<p>
See this <a href="http://github.com/ahbishop/nimbus/blob/master/monitoring/nagios/ChangeLogSummary">extended
summary</a> as well as the full monitoring
<a href="http://github.com/ahbishop/nimbus/blob/master/monitoring/nagios/ChangeLog">changelog</a>.
</p>
</li>
</ul>
<i>2.4 - Cloud client</i>
<ul>
<li>
<p>
The current cloud client as of this release is cloud-client-014. This service
release should also work with cloud clients 011 through 013.
</p>
<p>
For cloud client changes, see
<a href="http://github.com/nimbusproject/nimbus/raw/master/cloud-client/nimbus-cloud-client-src/CHANGES.txt">here.</a>
</li>
</ul>
<a name="2.3"> </a>
<br><hr><br>
<i>2.3 - Summary</i>
<ul>
<li>
<p>
Support for the EC2 Query API.
</p>
</li>
<li>
<p>
Introduction of administrative web portal interface. Supports
securely distributing user credentials.
</p>
</li>
<li>
<p>
Refactored workspace-control and integrated with
<a href="http://libvirt.org/">libvirt</a>. Includes initial
support for the
<a href="http://www.linux-kvm.org">KVM</a> hypervisor.
</p>
</li>
<li>
<p>
Assorted bug fixes and minor enhancements.
</p>
</li>
</ul>
<i>2.3 - Services</i>
<ul>
<li>
<p>
Support for the EC2 Query API. Tested with the Python
<a href="http://code.google.com/p/boto/">boto</a> client but should
work with others. The service does not run in the standard Globus container;
it spawns a separate Jetty process. While installed by default, it requires
configuration before it can be used.
</p>
</li>
<li>
<p>
EC2 SOAP API support has been upgraded to version 2009-08-15. This means
ec2-api-tools clients must be upgraded to
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-42584.zip">
this version</a>. Early work has been done to support multiple versions
concurrently, but this functionality is not yet available.
</p>
</li>
<li>
<p>
The new Nimbus web portal is based on Django and is a standalone
component with (in this version) no ties to the other Nimbus services. This
component's current sole functionality is to facilitate securely
providing users with their X509 and query credentials. It will be expanded in future releases to include more functionality for both users and administrators.
</p>
</li>
<li>
<p>
The Context Broker has been refactored and merged into the main Nimbus
source tree. It is installed by default but is not enabled because it
needs configuration.
</p>
</li>
<li>
<p>
The Nimbus Derby configuration now supports network access, though it
is disabled by default. At install, passwords are generated and stored
in <i>var/derby.properties</i>. The Nimbus service still uses the
embedded interface. See
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6516">Bug 6516</a>
for details.
</p>
</li>
<li>
<p>Assorted bugfixes:</p>
<ul>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6631">
Bug 6631 - inconsistency with resource reservations
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6633">
Bug 6633 - backwards compatibility with context broker is broken
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6735">
Bug 6735 - blankspace regression
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6856">
Bug 6856 - user data is broken
</a></li>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6869">
Bug 6869 - blankspace creation regression
</a></li>
</ul>
</li>
</ul>
<i>2.3 - Control Agents</i>
<ul>
<li>
<p>
The workspace-control component has been significantly refactored.
It has been moved from <i>backend/</i> to <i>control/</i> in the
source tree.
Direct command-line invocation of Xen operations has been replaced with
calls to the excellent <a href="http://libvirt.org/">libvirt</a>
library. This opens the door to easier integration with several other
hypervisors, starting with KVM.
</p>
</li>
<li>
<p>
Initial <a href="http://kvm.qumranet.com/kvmwiki">KVM</a> support is
provided.
</p>
</li>
<li>
<p>Assorted bugfixes:</p>
<ul>
<li><a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=6868">
Bug 6868 - in non-cloud configurations, .gz support is broken
</a></li>
</ul>
</li>
</ul>
<i>2.3 - Cloud client</i>
<ul>
<li>
<p>
The current cloud client as of this release is cloud-client-014. This service
release should also work with cloud clients 011 through 013.
</p>
<p>
For cloud client changes, see
<a href="http://github.com/nimbusproject/nimbus/raw/master/cloud-client/nimbus-cloud-client-src/CHANGES.txt">here.</a>
        </p>
    </li>
</ul>
<a name="TP2.2"> </a>
<br><hr><br>
<i>2.2 - Summary</i>
<ul>
<li>
<p>
Introduction of the metadata server which mimics the EC2 HTTP
query based metadata server.
</p>
</li>
<li>
<p>
Introduction of a standalone context broker, see the downloads
page. This runs by itself so that you can use just the context
broker to contextualize virtual clusters on EC2. No Nimbus cluster
is necessary.
</p>
</li>
<li>
<p>
Bug fixes, see below for specifics.
</p>
</li>
</ul>
<i>2.2 - Services</i>
<ul>
<li>
<p>
Added a metadata server which responds to VMs' HTTP queries, using
the same path names as the EC2 metadata server. The URL for this
is obtained by looking at <i>/var/nimbus-metadata-server-url</i>
on the VM, which is an optional VM customization that can be made.
See <i>"etc/nimbus/workspace-service/metadata.conf"</i> for the
details.
</p>
<p>
It responds based on source IP address, so there is an
assumption that the immediate local network is non-spoofable.
</p>
<p>
The metadata server is disabled by default.
</p>
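<p>
For illustration, a short Python 3 sketch of a query made from inside a VM; the path
shown is just one example of an EC2-style metadata path, and the set of supported
paths depends on the deployment.
</p>
<pre>
from urllib.request import urlopen

# The optional customization places the server URL in this file on the VM.
with open("/var/nimbus-metadata-server-url") as f:
    base_url = f.read().strip()

# Example EC2-style path; others follow the same naming scheme.
print(urlopen(base_url + "/latest/meta-data/public-ipv4").read().decode())
</pre>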
</li>
<li>
<p>
Introduction of a standalone context broker, see the downloads
page. This runs by itself so that you can use just the context
broker to contextualize virtual clusters on EC2. No Nimbus cluster
is necessary.
</p>
</li>
<li>
<p>
Added user-data support to EC2 remote interfaces.
</p>
</li>
<li>
<p>
Added user-data support to the WSRF operations, but namespaces did
not change. This maintains client forward compatibility. If the
user data element is missing, that is not an issue for the service.
</p>
</li>
<li>
<p>
Added getGlobalAll to the RM API, see enhancement request
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6556">6556</a>
</p>
</li>
<li>
<p>
Added <i>MetadataServer</i> module and user-data to <i>VM</i> to
the RM API.
</p>
</li>
<li>
<p>
Fixed these EC2 interface bugs:
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=6530">wrong
instance ID is returned</a> and
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=6537">describe
instances fails with parameter</a>.
</p>
</li>
<li>
<p>
Fixed misc bugs
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=6546">6546</a>
and <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=6545">6545
(pilot plugin initialization failure)</a>.
</p>
</li>
</ul>
<i>2.2 - Cloud client</i>
<ul>
<li>
<p>
Current cloud client as of this release is cloud-client-011. This
supports contextualization using the new standalone context broker.
</p>
</li>
<li>
<p>
A lone invocation of "--status" (which prints all your currently
running instances) will now print the associated cloud handle
of each workspace.
</p>
</li>
<li>
<p>
Java 1.5 (Java 5) is now a requirement
</p>
</li>
<li>
<p>
The TP2.2 service side is backwards compatible with the "old
style" contextualization but this cloud client only supports
the new one. <i>You can only use this against Nimbus TP2.1
installations if you are not using contextualization</i>.
</p>
</li>
<li>
<p>
Support for contextualizing easily with EC2 resources. See the
output of "--extrahelp" for the new "--ec2script" option. Sample
EC2 cluster.xml file is @ "samples/ec2basecluster.xml"
</p>
<p>
This will take care of the context broker interactions for you and
give you a suggested set of EC2 commands to run (including files
for metadata) for the virtual cluster to contextualize while running
on EC2.
</p>
</li>
<li>
<p>
Fixed a bug in the "lib/this-globus-environment.sh" script: the
X509_CERT_DIR variable was being set incorrectly.
</p>
</li>
</ul>
<i>2.2 - Context agent</i>
<ul>
<li>
<p>
A new version of the context agent is necessary to contextualize a
virtual cluster with Nimbus TP2.2's metadata server and the new
context broker.
</p>
</li>
</ul>
<a name="TP2.1"> </a>
<br><hr><br>
<i>2.1 - Summary</i>
<ul>
<li>
<p>
Introduction of an auto-configuration program which
guides you through many of the initial configuration steps and
runs several validity tests.
</p>
</li>
<li>
<p>
Introduction of the Nimbus AutoContainer program which
allows you to set up a Globus Java web services environment from
scratch (including security) in less than a minute.
</p>
</li>
<li>
<p>
Introduction of the <i>cloud-admin</i> program which allows you
to very easily manage new users in a cloud configuration.
</p>
</li>
<li>
<p>
No protocol changes to WSRF based messaging. Previous
clients such as cloud-client-010 are compatible.
</p>
</li>
<li>
<p>
Protocol update to match the current Amazon EC2 deployment,
see below for details.
</p>
</li>
<li>
<p>
New workspace-control configuration options to support more
kinds of deployments, see below for details.
</p>
</li>
<li>
<p>
New service requirement: Java JDK5+ (<i>aka</i> Java 1.5+)
</p>
</li>
<li>
<p>
Updated documentation.
Added an <a href="plugins/index.html">extensibility guide</a>
and <a href="admin/upgrading.html">upgrade guide</a>.
</p>
</li>
<li>
<p>
Bug fixes, see below for specifics.
</p>
</li>
</ul>
<i>2.1 - Services</i>
<ul>
<li>
<p>
Introduction of an auto-configuration program which will
guide you through many of the initial configuration steps and
run several validity tests.
</p>
<p>
See <a href="admin/quickstart.html#part-IIb">this section</a> of
the administrator quickstart for more information.
</p>
</li>
<li>
<p>
Introduction of the Nimbus AutoContainer program which will
allow you to set up a Globus Java web services environment from
scratch (including security) in less than a minute.
</p>
<p>
It requires a separate download.
See <a href="admin/quickstart.html#auto-container">this section</a>
of the administrator quickstart for more information.
</p>
</li>
<li>
<p>
Introduction of the "cloud-admin" program which will allow you
to very easily manage new users in a
<a href="doc/cloud.html">cloud</a> configuration.
</p>
<p>
It is installed at the same time as the auto-configuration program, as
<i>$GLOBUS_LOCATION/share/nimbus-autoconfig/cloud-admin.sh</i>;
see <a href="doc/cloud.html#cloud-admin">this section</a>
of the cloud guide for more information.
</p>
</li>
<li>
<p>
Protocol update to match the current Amazon EC2 deployment:
</p>
<p>
Nimbus TP2.1 supports the <i>2008-05-05</i> WSDL
(used by
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-24159.zip">this
EC2 client</a>) as opposed to Nimbus TP2.0 which supported
the <i>2008-02-01</i> WSDL
(used by
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-19403.zip">this
EC2 client</a>).
</p>
</li>
<li>
<p>
New service requirement: Java JDK5+ (<i>aka</i> Java 1.5+)
</p>
</li>
<li>
<p>
Resolved bug
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6390">6390</a>:
"notifications script is not sh compliant"
</p>
<p>
The notification scripts now directly use the intended "bash" shell.
</p>
</li>
<li>
<p>
Resolved bug
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6474">6474</a>:
"destruction callbacks were not registered"
</p>
<p>
An internal problem was fixed which made the logs wrong as well
as causing problems for the client at destroy time. In particular,
a VM would be destroyed but the remote client would not hear the
last notification of the event, causing it to hang.
</p>
</li>
<li>
<p>
Resolved bug
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6397">6397</a>:
"reservation ID mapping verification wrong for single-VM reservations"
</p>
<p>
The EC2 reservation emulation is now working correctly with single
VMs.
</p>
</li>
<li>
<p>
Resolved bug
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6475">6475</a>:
"repository + scp propagation"
</p>
<p>
The EC2 messaging system now works with setups that use
SCP propagation; there is a new relevant
configuration in the <i>elastic.conf</i> file.
</p>
</li>
<li>
<p>
Resolved miscellaneous/cosmetic bugs
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6393">6393</a>,
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6394">6394</a>,
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6396">6396</a>,
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6398">6398</a>, and
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6416">6416</a>.
</p>
</li>
</ul>
<i>2.1 - Reference clients</i>
<ul>
<li>
<p>
Cloud and reference clients did not change. Current cloud client
as of this release is cloud-client-010.
</p>
</li>
<li>
<p>
You will need to update any EC2 client you use with Nimbus:
</p>
<p>
Nimbus TP2.1 supports the <i>2008-05-05</i> WSDL
(used by
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-24159.zip">this
EC2 client</a>) as opposed to Nimbus TP2.0 which supported
the <i>2008-02-01</i> WSDL
(used by
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-19403.zip">this
EC2 client</a>).
</p>
</li>
</ul>
<i>2.1 - Control agents</i>
<ul>
<li>
<p>
Added a new option to create VMs with "tap:aio" instead of using
the "file" method (these are Xen terms for methods of mounting
the disks). The "tap:aio" method is often
used in Xen 3.2 setups and is now possible to use via
workspace-control. See the new <i>worksp.conf.sample</i>.
</p>
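<p>
For reference, the two styles look like this in a Xen domain definition
(the image path and device name are illustrative only):
</p>
<pre>
# The "file" loopback method:
disk = ['file:/srv/images/workspace-42.img,sda1,w']

# The "tap:aio" blktap method, common on Xen 3.2 hosts:
disk = ['tap:aio:/srv/images/workspace-42.img,sda1,w']
</pre>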
</li>
<li>
<p>
Resolved enhancement request
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=6326">6326</a>:
"use matching initrd with kernel"
</p>
<p>
This allows you to configure workspace-control to take the kernel
filename it is launching a VM with and search for a matching
initrd based on suffix rules you set up. This allows you to
easily use many of the Xen guest kernels that are created with popular
Linux distributions.
</p>
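<p>
The idea is roughly as follows; this is a hypothetical Python sketch, not the
actual workspace-control code, and the suffix rule shown is only an example.
</p>
<pre>
import os

def matching_initrd(kernel_path, rules):
    """rules maps a kernel filename prefix to an initrd prefix,
    e.g. {"vmlinuz-": "initrd-"}; the version suffix is carried over."""
    directory, name = os.path.split(kernel_path)
    for kernel_prefix, initrd_prefix in rules.items():
        if name.startswith(kernel_prefix):
            suffix = name[len(kernel_prefix):]
            candidate = os.path.join(directory, initrd_prefix + suffix)
            if os.path.exists(candidate):
                return candidate
    return None

# e.g. matching_initrd("/boot/vmlinuz-2.6.18-xen", {"vmlinuz-": "initrd-"})
</pre>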
</li>
</ul>
<a name="TP2.0"> </a>
<br><hr><br>
<i>2.0 - Summary</i>
<ul>
<li>
<p>
Introduction of the FAQ which explains
many things you may already know, but it also includes new descriptions
of the component system now more clearly articulated in the Nimbus
TP2.0 release.
</p>
</li>
<li>
<p>
Introduction of the Java
RM API which is
a bridge between protocols and resource management implementations.
The resource managers can remain protocol/framework/security agnostic
(they can be "pure Java") and various protocol implementations
can be implemented independently (and even simultaneously). Runtime
orchestration of implementation choices is directed by industry
standard
<a href="http://www.springframework.org/">Spring</a> dependency
injection.
</p>
</li>
<li>
<p>
Introduction of an alternative remote protocol implementation based
on Amazon <a href="http://aws.amazon.com/ec2">EC2</a>'s WSDL interface
description. It is only a partial implementation (see below).
It can be used simultaneously alongside the WSRF based protocols.
</p>
</li>
<li>
<p>
More friendly configuration mechanism for administrators including
area-specific ".conf" files instead of any XML and the addition of
some helper scripts.
</p>
</li>
<li>
<p>
No protocol changes (only an additional remote protocol). Previous
clients such as cloud-client-009 are compatible.
</p>
</li>
</ul>
<i>2.0 - Services</i>
<ul>
<li>
<p>
Introduction of the Java
RM API which is
a bridge between protocols and resource management implementations. The
resource managers can remain protocol/framework agnostic
(they can be "pure Java") and various protocol implementations
can be implemented independently. Runtime orchestration of implementation
choices is directed by <a href="http://www.springframework.org/">Spring</a>
dependency injection.
</p>
</li>
<li>
<p>
Introduction of an alternative remote protocol implementation based
on Amazon EC2's WSDL interface description
(namespace <i>http://ec2.amazonaws.com/doc/2008-02-01/</i>)
</p>
<p>
It can be used simultaneously alongside the previous remote
interfaces. If the EC2 protocol layer does not recognize instance
identifiers being reported by the underlying resource manager
(for example when gathering "describe-instances" results), it
will create new, unique instance and reservation IDs on the fly for
them.
</p>
<p>
It is only a partial protocol implementation, the operations behind
these EC2 commandline clients are currently provided:
</p>
<ul>
<li>
<p>
ec2-describe-images - See what images in your personal cloud
directory you can run.
</p>
</li>
<li>
<p>
ec2-run-instances - Run images that are in your personal cloud
directory.
</p>
</li>
<li>
<p>
ec2-describe-instances - Report on currently running instances.
</p>
</li>
<li>
<p>
ec2-terminate-instances - Destroy currently running instances.
</p>
</li>
<li>
<p>
ec2-reboot-instances - Reboot currently running instances.
</p>
</li>
<li>
<p>
ec2-add-keypair [*] - Add personal SSH public key that can be
installed for root SSH logins
</p>
</li>
<li>
<p>
ec2-delete-keypair - Delete keypair mapping.
</p>
</li>
</ul>
<p>
[*] - One of two add-keypair implementations can be chosen by
the administrator.
</p>
<ul>
<li>
<p>
One is the normal implementation where the
server-side generates a private and public key (using
<a href="http://www.jcraft.com/jsch/">jsch</a>) and delivers
the private key to you.
</p>
</li>
<li>
<p>
The other (configured by default) is a break from the
regular semantics. It allows the keypair "name" you
send in the request to be the name AND the public key value.
This means there is never a private key server-side and
also that you can use keys you already have on your system.
</p>
</li>
</ul>
</li>
<li>
<p>
More friendly configuration mechanism for administrators including
area-specific ".conf" files (instead of XML) and the addition of
some helper scripts.
</p>
<p>
If you are familiar with previous Nimbus versions (VWS), these
".conf" files hold anything found in the old "jndi-config.xml" file
which you don't need to look at anymore.
The files hold name=value pairs with surrounding comments. They
are organized by area: accounting.conf, global-policies.conf,
logging.conf, pilot.conf, network.conf, ssh.conf, vmm.conf.
</p>
</li>
<li>
<p>
Service configurations are now in "etc/nimbus/workspace-service" and
"etc/nimbus/elastic". Advanced configurations (which you should
not need to alter normally are now in
"etc/nimbus/workspace-service/other" and "etc/nimbus/elastic/other".
</p>
</li>
<li>
<p>
New persistence management wrapper scripts are in "share/nimbus"
and the persistence directory has moved to "var/nimbus"
</p>
</li>
<li>
<p>
Support for site-to-site file management (staging) was removed.
</p>
</li>
<li>
<p>
Developers: Significant directory reworkings (and subsequent build
file changes) to organize modules more coherently, allowing for
easier module independence.
</p>
<p>
Build system now clearly separates anything to do with the target
deployment (only one target deployment at the moment, GT4.0.x).
</p>
</li>
<li>
<p>
New Java dependencies:
</p>
<ul>
<li>
<a href="http://www.springframework.org/">Spring</a> - just the
core dependency injection library. The
RM API
depends on Spring import statements but no other module has any
direct coupling to it.
</li>
<li>
<a href="http://cglib.sourceforge.net/">cglib</a> - used
"invisibly" alongside Spring to provide some limited code
generation when convenient.
</li>
<li>
<a href="http://ehcache.sourceforge.net/">ehcache</a> - used
for in-memory object caching.
</li>
<li>
<a href="http://jug.safehaus.org/">jug</a> - used for UUID
generation instead of needing an axis dependency.
</li>
<li>
<a href="http://www.jcraft.com/jsch/">jsch</a> - used for
SSH keypair generation if necessary (see [*] in the EC2
section).
</li>
</ul>
</li>
</ul>
<i>2.0 - Reference clients</i>
<ul>
<li>
<p>
The clients have stayed the same (on purpose, to limit the amount of
change) except for some library package name changes.
</p>
</li>
<li>
<p>
When using a cloud running the EC2 front end implementation, you
can download this
<a href="http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip">EC2
client</a> from Amazon or try a number of different clients that are
<a href="http://www.google.com/search?hl=en&q=ec2%20client">out there</a>.
</p>
</li>
</ul>
<i>2.0 - Control agents</i>
<ul>
<li>
<p>
Workspace-control has stayed the same (on purpose, to limit the
amount of change).
</p>
</li>
</ul>
<i>2.0 - Workspace pilot system</i>
<ul>
<li>
<p>
No changes except that the server side configuration location
has moved from the "jndi-config.xml" file to "pilot.conf"
</p>
</li>
</ul>
<a name="TP1.3.3"> </a>
<br><hr><br>
<i>1.3.3.1 - Summary</i>
<ul>
<li>
<p>
Introduction of support for contextualization with virtual
clusters. See the <a href="clouds/">clouds page</a> and the new
<a href="clouds/clusters.html">one-click clusters</a> page to see
the various new features in action.
</p>
</li>
<li>
<p>
New ensemble service report operation allows efficient queries
about a large number of workspaces.
</p>
</li>
<li>
<p>
Support for storing images at the repository in gzip format and
retrieving them from the repository in gzip format. This can
save a lot of time in cluster situations.
</p>
</li>
<li>
<p>
Support for pegging the number of vcpus clients receive.
</p>
</li>
<li>
<p>
Various client enhancements including internal organization,
cleaner output, and new commandline options. Embedded security
tools (like grid-proxy-init) work more out of the box now.
</p>
</li>
<li>
<p>
No configuration migrations are necessary for moving to this
version from TP1.3.2. Some configuration additions will be
necessary if you'd like to take advantage of features.
</p>
</li>
<li>
<p>
There was a WSDL update: additions, changes and new namespaces.
The base namespace for workspace schemas is now
<i>http://www.globus.org/2008/06/workspace/</i>
</p>
</li>
<li>
<p>
Some bug fixes.
</p>
</li>
</ul>
<i>1.3.3.1 - Services</i>
<ul>
<li>
<p>
Integration with context broker.
</p>
</li>
<li>
<p>
New ensemble service report operation allows efficient queries
about a large number of workspaces. Can retrieve status and
error messages about entire ensemble at once.
</p>
</li>
<li>
<p>
Fixed scheduler backout to correctly handle situation where
ensemble wasn't launched yet but ensemble-destroy was invoked.
</p>
</li>
<li>
<p>
Fixed a bug where IP address updates were not passing through the cache
layer to the DB correctly, causing a possible inconsistency if the container
restarted in certain circumstances. <b>NOTE:</b> <i>this bugfix
was not present in TP1.3.3 but is present in TP1.3.3.1</i>.
</p>
</li>
<li>
<p>
Various internal changes (see CVS log)
</p>
</li>
<li>
<p>
No configuration changes are necessary for moving to this version
from TP1.3.2. But to enable the context broker, you need to
configure paths to a credential for it in the jndi-config file
and make sure the WSDD file lists the context broker as in the
source file "deploy-server.wsdd" (which becomes server-config.wsdd)
</p>
</li>
</ul>
<i>1.3.3.1 - Reference clients</i>
<ul>
<li>
<p>
Added cloud-client cluster and contextualization support. Includes
new "--cluster" flag (see cloud-client CHANGES.txt for full changes
there).
</p>
<p>
See the <a href="clouds/">clouds page</a> and the new
<a href="clouds/clusters.html">clusters</a> page.
</p>
</li>
<li>
<p>
The regular commandline client has new flags for ensemble
and context broker support. See "-h" output.
</p>
</li>
</ul>
<i>1.3.3.1 - Control Agents</i>
<ul>
<li>
<p>
Support for gzip via filename-sense. See cloud
<a href="clouds/clusters.html#compression">notes</a> on image
compression/decompression. This can save a lot of time in cluster
launch situations since the gzip/gunzip happens on the VMMs
simultaneously, cutting transfer times (where there is contention)
considerably.
</p>
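<p>
The filename-sense behavior amounts to roughly this (illustration only, not the
workspace-control code):
</p>
<pre>
import subprocess

def maybe_gunzip(image_path):
    # Decompress on the VMM only when the transferred file name ends in ".gz".
    if image_path.endswith(".gz"):
        subprocess.check_call(["gunzip", image_path])
        return image_path[:-3]
    return image_path
</pre>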
</li>
<li>
<p>
Local-locked the control of dhcpd start and stop: now works for
situations where multiple workspaces are deployed on a VMM
simultaneously (such as one VM per core and launching as part of
a cluster). The DHCP adjustment was being exercised
simultaneously, revealing the race.
</p>
</li>
<li>
<p>
There is no need to change the workspace-control configuration
file from a TP1.3.2 compatible one. There is a new configuration
if you want to use it, though. The "[behavior] --> num_cpu_per_vm"
configuration allows you to peg the number of vcpus that are
assigned to every workspace.
</p>
<p>
You can choose to not upgrade workspace-control at all if you don't
want the features listed here.
</p>
</li>
</ul>
<a name="TP1.3.2"> </a>
<br><hr><br>
<i>1.3.2 - Summary</i>
<ul>
<li>
<p>
Introduction of the cloud configuration and cloud client for user
friendly client access to the workspace service.
</p>
</li>
<li>
<p>
Introduction of the "groupauthz" authorization plugin for typical
configurations including the cloud setup.
</p>
</li>
<li>
<p>
Clients may now send customization tasks with request, files on the
image will be replaced with the content. The cloud client, for
example, is set up by default to send a customization request that
sets up the workspace's "/root/.ssh/authorized_keys" file.
</p>
</li>
<li>
<p>
Clients can request an alternate unpropagation target to save a
template VM into a new personal copy. This new URL may be requested
both at creation time and on the fly in an unpropagation request.
</p>
</li>
<li>
<p>
Centralization of MAC address allocations in the workspace
service. This allows all backend configuration files to be
identical. Older/advanced configurations are still possible but
not recommended unless necessary.
</p>
</li>
<li>
<p>
Hard disk images are now supported (client may bring a matching
kernel along).
</p>
</li>
<li>
<p>
Various client enhancements including internal organization,
cleaner output, and new commandline options.
</p>
</li>
<li>
<p>
A few bug fixes.
</p>
</li>
<li>
<p>
There was a WSDL update: additions, changes and new namespaces.
The base namespace for workspace schemas is now
<i>http://www.globus.org/2008/03/workspace/</i>
</p>
</li>
</ul>
<i>1.3.2 - Services</i>
<ul>
<li>
<p>
See the <a href="doc/cloud.html">Cloud Guide</a> for an overview
of a new set of configurations/conventions that allow for clients
to get up and running in minutes even from laptops on NATs.
Currently this comes at the cost of obscuring some features like
group deployments and multiple NICs.
</p>
</li>
<li>
<p>
Centralized MAC address allocations to the workspace service. This
allows all backend configuration files to be identical.
Older/advanced configurations are still possible but not recommended
unless necessary.
</p>
<p>
There is a new configuration in the <i>jndi-config.xml</i> file that
allows the administrator to define the valid prefix for MAC
address selection. See <i>WorkspaceFactoryService</i> ->
<i>NetworkAdapter</i> -> <i>macPrefix</i>
</p>
<p>
Once an IP is assigned a MAC address (during service initialization)
it remains with that IP as long as it is configured as part of the
network pools. This ensures that local network devices can cache
MAC/IP bindings without needing to be manually cleared (no need for
an unsolicited ARP reply to guarantee connectivity).
</p>
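<p>
Conceptually the allocation behaves like this hedged sketch (not the service code;
the prefix value shown is only an example):
</p>
<pre>
import random

def mac_for_ip(ip, assigned, prefix="A2:AA:AA"):
    """assigned is the persistent ip -> mac mapping; prefix stands in
    for the administrator-configured macPrefix (example value only)."""
    if ip in assigned:
        return assigned[ip]
    while True:
        tail = ":".join("%02X" % random.randint(0, 255) for _ in range(3))
        mac = prefix + ":" + tail
        if mac not in assigned.values():
            assigned[ip] = mac
            return mac
</pre>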
</li>
<li>
<p>
Introduction of the "groupauthz" plugin. This comes directly with
the workspace service (no separate plugin installation is
necessary) but it is not enabled by default. This authorization
plugin supports different policies for different group members
which you organize by inserting identities into different group
files.
</p>
<p>
The plugin can enforce the following policies. The request data
to check is determined on a per-request, per-client basis.
The <b>limits</b> are defined on a per group basis (every caller
identity must be a part of a group).
</p>
<ul>
<li>
Maximum currently reserved minutes at one point in time. If the
caller has two other workspaces with 10 hours scheduled for each,
the value being checked against this policy would be 20 hours
plus whatever time the current request is.
</li>
<li>
Maximum elapsed and currently reserved minutes at one point in
time. If the caller has one other workspace with 10 hours
scheduled and 80 hours of recorded past usage, the value being
checked against this policy would be 90 hours plus whatever time
the current request is. This is the all-time maximum usage cap.
</li>
<li>
Maximum number of running workspaces at one point in time.
</li>
<li>
Maximum number of workspaces per request (the largest group
request possible).
</li>
<li>
The image node that must be specified.
</li>
<li>
The image node base directory that must be specified.
</li>
<li>
Support for identity-hash based image subdirectories
(see the cloud setup documentation to understand this
convention).
</li>
</ul>
<p>
Each policy can be set to disabled/infinite for specific groups
if you desire.
</p>
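<p>
As an illustration of the arithmetic behind the reserved-time policies (not the
plugin's actual Java code):
</p>
<pre>
def within_reserved_limit(current_reserved_minutes, requested_minutes, limit):
    """limit may be None, meaning disabled/infinite for this group."""
    if limit is None:
        return True
    return current_reserved_minutes + requested_minutes &lt;= limit

# Two running workspaces of 10 hours each plus a new 2-hour request:
# within_reserved_limit(1200, 120, limit=1500) is True.
</pre>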
</li>
<li>
<p>
Arbitrary file customization tasks may be sent with the workspace
creation request. The image is mounted on the VMM and the contents
of the task are placed into the specified file.
</p>
<p>
This requires <i>mount-alter.sh</i> support on the backend which
expects the <i>mount -o loop</i> construct to work without specific
filesystem selection; i.e., this will not support workspaces with
filesystems that the VMM kernels do not support.
</p>
<p>
This requires three new <i>jndi-config.xml</i> configurations:
</p>
<ul>
<li><i>WorkspaceService</i> -> <i>home</i> -> <i>localTempDirectory</i></li>
<li><i>WorkspaceService</i> -> <i>home</i> -> <i>scpPath</i></li>
<li><i>WorkspaceService</i> -> <i>home</i> -> <i>backendTempDirectory</i></li>
</ul>
</li>
<li>
<p>
Inclusion of alternate unpropagation URL. This allows the client
to specify the target URL for where the workspace is unpropagated.
It can be specified as part of the creation request or overridden
after deployment. If the default shutdown mechanism was to destroy
the workspace, this can still be used (with shutdown-save) to cause
unpropagation to the given URL.
</p>
</li>
<li>
<p>
Authorization enhancement to support late-specified alternate
unpropagation URL. An operation to check the contents of a
post-deployment alternate propagation URL request was added to the
authorization callout interface.
</p>
<p>
This can be used to filter out invalid requests. For
example, the groupauthz plugin discussed above will use the same
logic here for image repository policy checking that it does at
create time. Previously, the authorization callout had only one
operation which was called at creation time only.
</p>
</li>
<li>
<p>
Fault information can now be stored as part of the Corrupted state
(for both RP queries and asynchronous state notifications). This
will help the remote client debug issues that can arise after a
successful factory creation, such as "the file you specified to
propagate does not exist at the image repository" etc.
</p>
</li>
<li>
<p>
Various internal changes (see CVS log)
</p>
</li>
<li>
<p>
See the end of the administrator guide for notes on configuration
migration to this version from older workspace releases.
</p>
</li>
</ul>
<i>1.3.2 - Reference clients</i>
<ul>
<li>
<p>
Introduction of cloud-client system. This consists of a wrapper
program run from a specific directory setup that contains an
embedded globus client installation among other things.
</p>
<p>
For more information on the client and setting up a configuration
to support it, see the <a href="doc/cloud.html">Cloud Guide</a>.
To see some examples of end-user commands, see the
<a href="clouds/">clouds</a> page.
</p>
</li>
<li>
<p>
The main client's help system was reorganized. For help on options
that are specific to an action, use "--help --<i>&lt;name of
action&gt;</i>". See the main "--help" output to get started.
</p>
</li>
<li>
<p>
The main client has a new "--exit-state" option that causes
modes with subscriptions (in either poll or async mode) to wait
for the specified state before exiting with success. If the
workspace moves to a terminal state (Corrupted etc.) then this
is considered an error. This is aimed at making scripts that
wrap the client more effective.
</p>
</li>
<li>
<p>
The main client has a new "--save-target" option whose argument is
an override to any previous unpropagation URL. You can use this
before or after deployment has succeeded (although it could fail
because of authorization issues). See the client's
"-h --shutdown-save" output for more information.
</p>
</li>
<li>
<p>
Arbitrary customization tasks are possible by defining them in an
optional parameters file. But the main client now also includes a
shortcut for the very common task of inserting your SSH public key
as the desired contents of the <i>/root/.ssh/authorized_keys</i>
file on the VM. See the client's "-h --deploy" output for more
information on this new "--sshfile" option.
</p>
</li>
<li>
<p>
Support for post-deployment error printing (faults can now be
included as part of Corrupted notifications).
</p>
</li>
<li>
<p>
Status client allows for a bulk query ("in one remote operation,
show me a short update of all workspaces I manage at this service").
</p>
</li>
<li>
<p>
Introduction of a base client API which abstracts operations out
from the webservices implementation and provides common subscription
tools, utility methods, etc. (the main workspace client was
internally reorganized to use this API: if you are a client
developer you could examine this code for a lot of concrete usage
samples).
</p>
</li>
</ul>
<i>1.3.2 - Control Agents</i>
<ul>
<li>
<p>
(re-)inclusion of mount-alter for file customization tasks. Using
this requires an additional sudo rule.
</p>
</li>
<li>
<p>
Fix for a bug where certain NIC bridging problems with a workspace
that had more than one NIC would not trip a backout.
</p>
</li>
<li>
<p>
Fix for a bug where the lack of a gateway specification would cause
a problem when inserting a workspace's DHCP policy. Lack of a
default gateway is legal (and sometimes necessary).
</p>
</li>
<li>
<p>
When the DHCP configuration file cannot be found, a more helpful error
is printed.
</p>
</li>
<li>
<p>
Files on the VMM were not being deleted in one unpropagation situation
where they should have been.
</p>
</li>
<li>
<p>
The VM name prefix sent to the VMM has been shortened from
"workspace" to "wrksp". String length limits for NIC names were
being reached too early ("wrksp" should accommodate workspace IDs in
the millions).
</p>
</li>
<li>
<p>
We are including a "foreign-subnet" script that allows VMMs to
deliver IP information over DHCP to workspaces even if the VMM
itself does not have a presence on the target IP's subnet. This
is an advanced configuration; you should read through the script's
leading comments and clear up any questions before
using it.
</p>
<p>
This is particularly useful for hosting workspaces with public IPs
where the VMMs themselves do not have public IPs. This is because
it does not require a unique interface alias for each VMM (public
IPs are often scarce resources).
</p>
</li>
<li>
<p>
Added support for booting hard disk images (pygrub). Resolves
enhancement request
<a href="http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=5423">#5423</a>.
The client must specify a mountpoint like "hda" instead of "hda1" for
this to trigger.
</p>
</li>
<li>
<p>
See the end of the administrator guide for notes on configuration
migration to this version from older workspace releases.
</p>
</li>
</ul>
<i>1.3.2 - Workspace pilot program</i>
<ul>
<li>
<p>
In some situations the sleep() system call that the pilot makes
during an unexpected backout situation was returning too early.
This syscall has been replaced by an alternate implementation that will
not fail in those situations.
</p>
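<p>
The idea behind the replacement is roughly this (a sketch only, not the pilot's
actual code):
</p>
<pre>
import time

def sleep_until(deadline):
    # Keep sleeping until the full deadline has passed, even if an
    # individual time.sleep() call returns early (e.g. due to a signal).
    remaining = deadline - time.time()
    while remaining > 0:
        time.sleep(remaining)
        remaining = deadline - time.time()
</pre>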
</li>
</ul>
<a name="TP1.3.1"> </a>
<br><hr><br>
<i>1.3.1 - Summary</i>
<ul>
<li>
<p>
Added support for workspace pilot resource management. The pilot
is a program the service will submit to a local site resource
manager in order to obtain time on the VMM nodes. When not
allocated to the workspace service, these nodes will be used for
jobs as normal (the jobs run in normal system accounts in Xen
domain 0 with no guest VMs running). See below.
</p>
</li>
<li>
<p>
Added functionality to ensure multiple workspaces (including groups
of workspaces) are co-scheduled. See below.
</p>
</li>
<li>
<p>
Various client enhancements including ensemble service support,
cleaner output, and new commandline options.
</p>
</li>
<li>
<p>
Various bug fixes.
</p>
</li>
<li>
<p>
There was a WSDL update: additions, changes and new namespaces.
</p>
</li>
</ul>
<i>1.3.1 - Services</i>
<ul>
<li>
<p>
Added support for workspace pilot resource management. The pilot
is a program the service will submit to a local site resource
manager in order to obtain time on the VMM nodes. When not
allocated to the workspace service, these nodes will be used for
jobs as normal (the jobs run in normal system accounts in Xen
domain 0 with no guest VMs running).
</p>
<p>
Several extra safeguards have been added to make sure the node is
returned from VM hosting mode at the proper time, including support
for:
<ul>
<li>the workspace service being down or malfunctioning</li>
<li>LRM preemption (including deliberate LRM job cancellation)</li>
<li>node reboot/shutdown</li>
</ul>
</p>
<p>
Also included is a one-command "kill 9" facility for administrators
as a "worst case scenario" contingency.
</p>
<p>
Using the pilot is optional. By default the service does not
operate with it, the service instead directly manages the nodes it
is configured to manage.
</p>
</li>
<li>
<p>
Added functionality to ensure multiple workspaces (including groups
of workspaces) are co-scheduled. This includes the introduction
of the Workspace Ensemble Service. This functionality allows
complex virtual clusters to have all of their component workspaces be
scheduled to run at once if that is necessary. This works with
both the default and pilot-based resource managers.
</p>
</li>
<li>
<p>
All remote interfaces (WSDLs/schemas) have been updated with at
least new namespaces. You can examine them directly online at the
WSDL and XSD files page
(or read the descriptions on the
Interfaces section). The main
difference is an extension to the factory create/deploy operation
and the addition of the ensemble service.
</p>
</li>
<li>
<p>
SSH based workspace-control invocations may now be configured
with an alternate private key.
</p>
</li>
<li>
<p>
SSH based workspace-control invocations now use options to ensure
easier identification of misconfigurations (no password entry
hang is possible now).
</p>
</li>
<li>
<p>
If using the pilot mechanisms, a new configuration section in the
service configuration file needs to be uncommented for pilot
specific configurations (see the configuration comments there).
</p>
</li>
<li>
<p>
If using the pilot mechanisms, a client may no longer submit a flag
to the factory that requests the workspace be unpropagated after
the running time has elapsed. Instead, unpropagation must be
triggered manually by a client before this deadline is reached.
</p>
</li>
<li>
<p>
If using the pilot mechanisms, a shared secret must be configured
in <i>etc/workspace_service/pilot/users.properties</i> for HTTP
digest access authentication based notifications from the pilot.
Use the included <i>shared-secret-suggestion.py</i> script.
(alternatively SSH may be used for notifications but it is slower)
</p>
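<p>
The included script generates a suggestion for you; producing a comparable random
value by hand looks like this (illustration only):
</p>
<pre>
import binascii, os

# 16 random bytes rendered as hex make a reasonable shared secret.
print(binascii.hexlify(os.urandom(16)).decode())
</pre>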
</li>
<li>
<p>
New dependencies (these are distributed with the service):
<ul>
<li>
<i><a href="http://backport-jsr166.sourceforge.net/">backport-util-concurrent</a></i>
</li>
<li>
<i><a href="http://jetty.mortbay.org/">jetty</a></i>
- only necessary if using the pilot with the faster, default
HTTP digest access authentication based notifications.
</li>
</ul>
</p>
</li>
<li>
<p>
Some platforms+JVMs have buffer size issues which caused some
workspace-control invocations to fail. This problem is addressed.
</p>
</li>
<li>
<p>
DHCP based network delivery to the VMs now requires unique
hostnames for each allocatable address (even if they do not
resolve to an IP). This addresses
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=5738">Bug #5738</a>.
</p>
</li>
</ul>
<i>1.3.1 - Reference clients</i>
<ul>
<li>
<p>
A new client <i>workspace-ensemble</i> allows you to destroy
all workspaces in a running ensemble as well as trigger the workspaces
in the ensemble to be co-scheduled and (afterwards) allowed to
launch. This trigger is also available in the last workspace
deployment of the ensemble, if desirable (this will save a web
services operation).
</p>
</li>
<li>
<p>
Enhancement
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=5795">Bug #5795</a>
is addressed, this allows an early unpropagate request to be sent.
The new <i>workspace</i> action is "--shutdown-save" and requires
a single or group workspace EPR.
</p>
</li>
<li>
<p>
The <i>workspace</i> program includes a new flag
"--trash-at-shutdown" which allows callers to include a request
that the service simply discards the VM after use (instead
of unpropagating it). This is typical behavior for virtual cluster
compute nodes, for example. The functionality itself is not
new in this release, just this flag. It allows you to include
the flag when using commandline based resource requests as
well as <i>override</i> a given resource request file with a
trash-at-shutdown flag.
</p>
</li>
<li>
<p>
The <i>workspace</i> program has improved output,
especially in the cases where you are launching groups and
ensembles.
</p>
</li>
</ul>
<i>1.3.1 - Control Agents</i>
<ul>
<li>
<p>
Note: a previously used TP1.2.3 or TP1.3 configuration file for
workspace-control will still work because of the nature of these
changes. See
<a href="doc/admin-index.html#migrating-workspaceVM">this
migration section</a> of the administrator's guide for details.
</p>
</li>
<li>
<p>
A bug with failed propagations has been addressed:
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=5681">Bug
#5681</a>.
</p>
</li>
<li>
<p>
Will now support older ISC DHCP versions (v2 servers). See
<a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=5740">Bug
#5740</a>.
</p>
</li>
<li>
<p>
The default paths for <i>ebtables</i> and the <i>dhcpd.conf</i>
file are now the more common occurrences:
<ul>
<li><i>/sbin/ebtables</i> is now <i>/usr/sbin/ebtables</i></li>
<li><i>/etc/dhcp/dhcpd.conf</i> is now <i>/etc/dhcpd.conf</i></li>
</ul>
</p>
</li>
</ul>
<i>1.3.1 - Workspace pilot program</i>
<ul>
<li>
<p>
This is a new tarball on the download page and is only necessary
when using pilot based resource management.
</p>
</li>
</ul>
<a name="TP1.3"> </a>
<br><hr><br>
<i>1.3 - Summary</i>
<ul>
<li>
<p>
There was a WSDL update, changes and new namespaces.
</p>
</li>
<li>
<p>
Functionality to start multiple workspaces in one request was
added, including introduction of the Workspace Group Service.
</p>
</li>
<li>
<p>
Optional accounting functionality was added, including introduction
of the Workspace Status Service.
</p>
</li>
<li>
<p>
Configuration enhancements to make service administration easier.
</p>
</li>
<li>
<p>
Various client enhancements including group and status service
support, reorganized help output, and new commandline options.
</p>
</li>
<li>
<p>
Various bug fixes.
</p>
</li>
</ul>
<i>1.3 - Services</i>
<ul>
<li>
<p>
All remote interfaces, WSDLs/schemas, have been updated and also
have new namespaces. You can examine them directly online at the
<a href="examples/compact/index.html">WSDL and XSD files</a> page
(or read the descriptions on the
<a href="interfaces/index.html">Interfaces</a> section).
</p>
</li>
<li>
<p>
The <a href="interfaces/factory.html">Workspace Factory Service</a>
was extended to support starting a homogeneous group of workspaces
in one deployment request. A global maximum group size can be
specified natively (without needing to use an authorization
callout).
</p>
</li>
<li>
<p>
The <a href="interfaces/groupservice.html">Workspace
Group Service</a> was added to manage groups after deployment.
See the <a href="interfaces/index.html#groupoverview">group
overview</a> on the main interfaces page.
</p>
</li>
<li>
<p>
Hooks for accounting modules were added. These plugins allow you
to track clients' used or reserved running time. There are
separate reader and writer interfaces for flexibility. A default
database backed implementation is provided and enabled by default.
By default this implementation includes a periodic write to log
files on the system (one for current reservations, another for
major events).
See <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5443">Bug 5443</a>.
</p>
</li>
<li>
<p>
The <a href="interfaces/statusservice.html">Workspace
Status Service</a> was added, it allows a Grid client to consult
the usage statistics that the service has tracked about it.
See <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5444">Bug 5444</a>.
</p>
</li>
<li>
<p>
Some configurations have been added, changed name or changed
location in the JNDI configuration file, see
<a href="doc/admin-index.html#migrating-workspaceVM">this
migration section</a> of the administrator's guide for details.
</p>
</li>
<li>
<p>
Resource selection now favors VMMs not in use. The previous
selection process accepted the first VMM with enough memory
which could result in a situation where e.g. two workspaces
are running on one VMM but no workspaces are running on another.
</p>
</li>
<li>
<p>
Resource pool configurations can now be adjusted without
resetting the database, see
<a href="doc/admin-index.html#migrating-workspaceVM">this
migration section</a> of the administrator's guide for details.
</p>
</li>
<li>
<p>
Networking address pool configurations can now be adjusted without
resetting the database, see
<a href="doc/admin-index.html#migrating-workspaceVM">this
migration section</a> of the administrator's guide for details.
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5441">Bug 5441</a>:
Add functionality for late network binding to client and service.
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5442">Bug 5442</a>:
Move persistence information to its own subdirectory. All
information is now stored under
<i>$GLOBUS_LOCATION/var/workspace_service/</i> instead of various
subdirectories of <i>$GLOBUS_LOCATION/var</i> itself.
</p>
</li>
<li>
<p>
Host certificate transfer functionality was removed. The
association configuration and WSDL has changed accordingly.
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5415">Bug 5415</a>:
WorkspacePersistenceDB not updated after workspace --shutdown
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5345">Bug 5345</a>:
resource not destroyed correctly when time expires and shutdown method is "trash"
</p>
</li>
<li>
<p>
Asynchronous notifications from workspace-control (propagation
events) are handled more reliably.
</p>
</li>
<li>
<p>
The toplevel build file includes many new convenience targets,
including more control over what is deployed/undeployed and more
control over the different kinds of persistence information.
</p>
</li>
<li>
<p>
The build files now do not proceed if your JDK is an earlier
version than 1.4.
</p>
</li>
</ul>
<i>1.3 - Reference clients</i>
<ul>
<li>
<p>
The help system was organized, run the client with "-h" to see the
definitive list and explanation of features old and new.
</p>
</li>
<li>
<p>
The client can subscribe and listen to many workspaces at a time
after deploying a group. As this can be quite verbose for large
groups, there are two new options to control subscription output
verbosity. See the "-h" text.
</p>
</li>
<li>
<p>
There is a <i>numnodes</i> argument that will control how many
workspaces will be requested during the create operation. If
there is a NodeNumber element in a given deployment request file,
this argument will override that. For more about group support,
see the <a href="interfaces/index.html">Interfaces</a> section.
</p>
</li>
<li>
<p>
The client can now run management commands using both regular and
group workspace EPRs (it looks at which it is dealing with).
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5441">Bug 5441</a>:
Add functionality for late network binding to client and service.
In the default case where subscriptions are desired, the client
will notice if networking is missing and requery for it when the
workspace(s) move to the Running state.
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5445">Bug 5445</a>:
various reference client improvements.
</p>
</li>
<li>
<p>
There is a new <i>workspace-status</i> client for querying
accounting information. See
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5444">Bug 5444</a>.
</p>
</li>
<li>
<p>
The sample XML (metadata, resource request, etc) files included
with the client have been updated and more samples have been added.
</p>
</li>
<li>
<p>
The client build now checks that the sample XML (metadata,
resource request, etc) files validate against their respective
schemas. If your ant installation does not include the
<i>xmlvalidate</i> task, these checks are skipped.
</p>
</li>
</ul>
<i>1.3 - Control Agents</i>
<ul>
<li>
<p>
Note: a previously used TP1.2.3 configuration file for
workspace-control will still work because of the nature of these
changes. See
<a href="doc/admin-index.html#migrating-workspaceVM">this
migration section</a> of the administrator's guide for details.
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5360">Bug 5360</a>:
destroy log shows dhcp/ebtables backout problem
</p>
</li>
<li>
<p>
<i>install.py</i> handles user groups better and has an improved
<i>--onlyverify</i> mode
</p>
</li>
<li>
<p>
Removed unnecessary configurations from the sample <i>worksp.conf</i>
file.
</p>
</li>
<li>
<p>
<i>ebtables-config.sh</i> rule backout handles an additional
corner case
</p>
</li>
</ul>
<i>1.3 - Internal (developers only)</i>
<ul>
<li>
<p>
JNDI class discovery is done differently, this may affect you if
you have alternate implementations of any module or plugin
interface. A new workspace Initializable interface can be used.
See the <i>org.globus.workspace.Locator</i> class.
</p>
</li>
<li>
<p>
Message intake and initial validation support is now implemented
as a plugin, see the
<i>org.globus.workspace.service.binding.BindingAdapter</i>
interface.
</p>
</li>
<li>
<p>
The default scheduler's "node picking" support is now implemented
as a plugin, see the
<i>org.globus.workspace.scheduler.defaults.SlotManagement</i>
interface.
</p>
</li>
<li>
<p>
AllocateAndConfigure (association) support is now implemented as a
plugin, see the
<i>org.globus.workspace.network.AssociationAdapter</i> interface.
</p>
</li>
<li>
<p>
New optional AccountingEventAdapter and AccountingReaderAdapter
plugins, see the <i>org.globus.workspace.accounting</i> package.
</p>
</li>
<li>
<p>
The optional creation-time authorization callout interface was
altered to include group requests as well as the caller's accrued
used and reserved running minutes (if an accounting reader is
running).
</p>
</li>
</ul>
<a name="TP1.2.3"> </a>
<br><hr><br>
<h3>TP1.2.3</h3>
<ul>
<li>
<p>
Significant documentation updates including the addition of a
guided <a href="doc/user-index.html">User Quickstart</a>
and the Workspace Marketplace.
</p>
</li>
<li>
<p>
Added the ability to specify multiple partitions for one VM.
There is a restriction in this version that only one partition
file may be used with the propagation mechanisms, the other
partitions must be cached or on a shared filesystem.
(<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5216">Bug 5216</a>)
</p>
</li>
<li>
<p>
Added the ability to create blank partitions on the fly if
the client specifies to do so by sending a storage request (the
MB of blank space needed) in the resource requirements.
</p>
<p>
Currently this hardcodes the filesystem to create on the blank
partition (the default is ext2); in the future this may be
specifiable by the client.
(<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5215">Bug 5215</a>)
</p>
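<p>
The mechanics amount to roughly the following (a hedged sketch, not the
workspace-control implementation):
</p>
<pre>
import subprocess

def create_blank_partition(path, megabytes, fs="ext2"):
    # Allocate the requested amount of zeroed space, then format it.
    subprocess.check_call(["dd", "if=/dev/zero", "of=" + path,
                           "bs=1M", "count=" + str(megabytes)])
    subprocess.check_call(["mkfs." + fs, "-F", path])
</pre>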
</li>
<li>
<p>
Added an HTTP transfer adapter for pre- and post-deployment
staging. Included is the ability to provide checksums that
will be checked after the transfer as well as decompression
functionality. For more details, see the
<a href="interfaces/optional.html">Optional parameters</a>
documentation.
(<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5219">Bug 5219</a>)
</p>
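<p>
The post-transfer checks amount to roughly the following (a hedged Python 3 sketch,
not the adapter's code; the <i>fetch</i> helper, URL, path, and digest are hypothetical,
and the checksum algorithm is only an example):
</p>
<pre>
import gzip, hashlib, shutil
from urllib.request import urlopen

def fetch(url, path, expected_sha1=None):
    with urlopen(url) as resp, open(path, "wb") as out:
        shutil.copyfileobj(resp, out)
    if expected_sha1 is not None:
        digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
        assert digest == expected_sha1, "checksum mismatch"
    if path.endswith(".gz"):
        with gzip.open(path, "rb") as zipped, open(path[:-3], "wb") as plain:
            shutil.copyfileobj(zipped, plain)
</pre>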
</li>
<li>
<p>
Added the ability to choose hypervisors in the resource pool
based on what networking associations they support. For example,
a request may arrive for a workspace to have NICs on two
separate networks: the pool node selection algorithm will use
the requirement to support both of these networks in its search.
(<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5214">Bug 5214</a>)
</p>
</li>
<li>
<p>
The workspace types schema, <i>workspace_types.xsd</i>, has a
new namespace: the "2006/08" part of it is now "2007/03".
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5211">Bug 5211</a>:
networking allocations were not backed out (returned to pool)
under all error conditions during initial request processing.
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5212">Bug 5212</a>:
queries on the Workspace Factory resource properties gave
incorrect association information after a container restart.
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5213">Bug 5213</a>:
the Advisory IP acquisition method was being incorrectly validated.
</p>
</li>
<li>
<p>
Resolved
<a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=5217">Bug 5217</a>:
the workspace-control program was not backing out DHCP policy
additions under all error conditions.
</p>
</li>
</ul>
<a name="TP1.2.2"> </a>
<br><hr>
<h3>TP1.2.2 _NAMELINK(TP1.2.2)</h3>
<ul>
<li>
<p>
Added support for DHCP delivery of networking information. See
the administrator guide
<a href="doc/admin-index.html#workspaceVM-backend-config-invm-networking">DHCP
overview and configuration section</a> which also includes a link
to a design document.
</p>
</li>
<li>
<p>
Added unit tests under "workspace-service/service/java/test/".
</p>
</li>
<li>
<p>
Streamlined the logistics section of metadata, see the
<a href="interfaces/metadata.html#logistics">logistics section</a>
of the interfaces guide for more information.
</p>
</li>
<li>
<p>
Small bugfixes in StateTransition.
</p>
</li>
<li>
<p>
Internal refactoring to better accommodate unit tests.
</p>
</li>
</ul>
<a name="TP1.2.1"> </a>
<br><hr>
<h3>TP1.2.1</h3>
<ul>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=4792">Bug 4792</a>
(propagation via globus-url-copy adds extra file URL scheme)
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=4793">Bug 4793</a>
(xenlocal arg parsing error)
</p>
</li>
<li>
<p>
Resolved <a href="http://bugzilla.globus.org/globus/show_bug.cgi?id=4879">Bug 4879</a>