m4_include(/mcs/m4/worksp.lib.m4)
_NIMBUS_HEADER(FAQ)
_NIMBUS_HEADER2(n,n,y,n,n,n,n)
_NIMBUS_LEFT2_COLUMN
_NIMBUS_LEFT2_ABOUT_SIDEBAR(n,y,n,n)
_NIMBUS_LEFT2_COLUMN_END
_NIMBUS_CENTER2_COLUMN
_NIMBUS_IS_DEPRECATED
<h2>Frequently Asked Questions</h2>
<ul>
<li>
<a href="#nimbus">What is Nimbus</a>?
</li>
<li>
<a href="#cloudkit">What is the main way to deploy Nimbus</a>?
</li>
<li>
<a href="#install">Is Nimbus hard to install</a>?
</li>
<li>
<a href="#nimbus-main-components">What are the main Nimbus components</a>?
</li>
<li>
<a href="#workspace-service">What is the Workspace Service</a>?
</li>
<li>
<a href="#wsrf-frontend">What is the WSRF frontend</a>?
</li>
<li>
<a href="#ec2-frontend">What is the EC2 frontend</a>?
</li>
<li>
<a href="#ec2-messaging">What EC2 operations are supported</a>?
</li>
<li>
<a href="#metadata-server">What is the metadata server</a>?
</li>
<li>
<a href="#metadata-fields">What metadata server fields are supported</a>?
</li>
<li>
<a href="#cloud-client">What is the cloud client</a>?
</li>
<li>
<a href="#reference-client">What is the reference client</a>?
</li>
<li>
<a href="#wpilot">What is the Workspace Pilot</a>?
</li>
<li>
<a href="#rm-api">What is the RM API</a>?
</li>
<li>
<a href="#wcontrol">What is workspace-control</a>?
</li>
<li>
<a href="#ctxbroker">What is the Context Broker</a>?
</li>
<li>
<a href="#ctxagent">What is the Context Agent</a>?
</li>
<li>
<a href="#nimbusweb">What is Nimbus Web</a>?
</li>
<li>
<a href="#cumulus">What is Cumulus</a>?
</li>
<li>
<a href="#cumulusclients">What clients can I use with Cumulus</a>?
</li>
<li>
<a href="#cumulusnimbus">How are Nimbus and Cumulus related</a>?
</li>
<li>
<a href="#cumulusnonimbus">Can Cumulus be installed without Nimbus</a>?
</li>
<li>
<a href="#nimbusnocumulus">Can Nimbus be installed without Cumulus</a>?
</li>
<li>
<a href="#cumulusprop">Does the Nimbus IaaS system directly use Cumulus for image propagation</a>?
</li>
<li>
<a href="#cumulusreliable">How reliable is Cumulus</a>?
</li>
<li>
<a href="#cumulusbackend">What type of storage system backs the Cumulus repository</a>?
</li>
<li>
<a href="#lantorrent">What is LANTorrent</a>?
</li>
<li>
<a href="#license">How is the software licensed</a>?
</li>
</ul>
<h2>&nbsp;</h2>
<br />
<div class="ulmoveleft">
<ul>
<li>
<p>
<a name="nimbus"> </a>
<b>What is Nimbus? _NAMELINK(nimbus)</b>
</p>
<p>
Nimbus is a set of open source tools that together provide
an "Infrastructure-as-a-Service" (IaaS) cloud computing
solution. Our mission is to evolve the infrastructure
with emphasis on the needs of science, but many
non-scientific use cases are supported as well.
</p>
<p>
Nimbus allows a client to lease remote resources by deploying
virtual machines (VMs) on those resources and configuring
them to represent an environment desired by the user.
</p>
<p>
It was formerly known as the "Virtual Workspace Service" (VWS),
but the "workspace service" is technically just one of the components
in the software
<a href="#nimbus-main-components">collection</a>.
</p>
</li>
<li>
<p>
<a name="cloudkit"> </a>
<b>What is the main way to deploy Nimbus? _NAMELINK(cloudkit)</b>
</p>
<p>
Options aren't always a good thing, especially to start with. The
main way to deploy Nimbus is the cloud configuration. This
involves hosting a site manager service and creating an image
repository (see the
<a href="admin/z2c/">Zero To Cloud guide</a> for details). You direct your new users to use the
<a href="clouds/cloudquickstart.html">cloud client</a> which
gets them up and running in just a few minutes.
</p>
<p>
<i>Overview of the cloud configuration:</i>
</p>
<img src="img/cloud-overview.png"
alt="cloud overview pic" />
</li>
<li>
<p>
<a name="install"> </a>
<b>Is Nimbus hard to install? _NAMELINK(install)</b>
</p>
<p>
Nimbus itself is not hard to install: it has a script-driven installer
that asks you two questions.
</p>
<p>
Nimbus requires that some dependencies are installed first. On the
service node: Java, Python, and bash. On the
hypervisor nodes: Python, bash, ebtables, libvirt and KVM or Xen.
</p>
<p>
All of these dependencies are installable via the package management
systems of all the popular Linux distributions.
</p>
<p>
See the <a href="admin/z2c/">Zero To Cloud</a> guide for details
including detailed
<a href="admin/z2c/service-dependencies.html">prerequisite information</a>.
</p>
</li>
<li>
<p>
<a name="nimbus-main-components"> </a>
<b>What are the main Nimbus components? _NAMELINK(nimbus-main-components)</b>
</p>
<div class="uldonotmoveleft">
<ul>
<li>
<p>
The <a href="#workspace-service">Workspace Service</a> site
manager
</p>
</li>
<li>
<p>
A <a href="#wsrf-frontend">WSRF based</a> remote protocol
implementation
</p>
</li>
<li>
<p>
An <a href="#ec2-frontend">EC2 based</a> remote protocol
implementation of their SOAP and Query APIs (<a href="#ec2-messaging">partial</a>)
</p>
</li>
<li>
<p>
<a href="#cumulus">Cumulus</a> is an open source implementation of the Amazon S3 REST API. It is used as the Nimbus repository solution and can also be installed standalone.
</p>
</li>
<li>
<p>
The <a href="#rm-api">RM API</a> bridge between
remote protocols/security and specific site manager
implementations.
</p>
</li>
<li>
<p>
The <a href="#cloud-client">cloud client</a> aims to get
users up and running in minutes with instance launches
and one-click clusters.
</p>
</li>
<li>
<p>
The <a href="#reference-client">reference client</a>
exposes the entire feature set in the WSRF protocol as
a commandline client (with underlying Java client library).
For advanced uses, scripting, portal integration, etc.
</p>
</li>
<li>
<p>
The <a href="#wpilot">Workspace Pilot</a> allows you to
integrate VMs with resources already configured to manage
jobs (i.e., already using a batch scheduler like PBS).
</p>
</li>
<li>
<p>
The <a href="#wcontrol">workspace-control</a> agent implements
VMM and network specific tasks on each hypervisor.
</p>
</li>
<li>
<p>
The <a href="#ctxbroker">Context Broker</a> allows clients
to coordinate large virtual cluster launches automatically
and repeatably.
</p>
</li>
<li>
<p>
The <a href="#ctxagent">Context Agent</a> lives on VMs and
interacts with the Context Broker at VM boot.
</p>
</li>
</ul>
</div>
<p>
The components are lightweight and self-contained so that they
can be selected and composed in a variety of ways. For example,
using the workspace service with the pilot will enable a different
cluster integration strategy. You can mix and match protocol
implementations with the "pure Java" resource management module.
</p>
<p>
Writing new components should be a matter of "dropping" them
in. As explained in
"<a href="#rm-api">What is the RM API</a>?", the Java side of things
is particularly LEGO&#0174; like.
As of Nimbus 2.3 workspace-control (the VMM component) is modularized with around 10 plugin points.
And we are working towards modularizing
even more and providing better implementations for various
components.
</p>
<p>
Any questions, suggestions, and requirements in this
area are appreciated.
</p>
</li>
<li>
<p>
<a name="workspace-service"> </a>
<b>What is the Workspace Service? _NAMELINK(workspace-service)</b>
</p>
<p>
The Workspace service is a standalone site VM manager that different
remote protocol frontends can invoke.
</p>
<p>
The currently supported protocols are Web Services based or HTTP based. They run in either an <a href="http://ws.apache.org/axis/">Apache Axis</a>
based Java container or <a href="http://cxf.apache.org/">Apache CXF</a>, but neither is strictly necessary:
</p>
<div class="uldonotmoveleft">
<ul>
<li>
<p>
There is nothing specific to web services based remote protocols
in the workspace service implementation; the messaging system
just needs to be able to speak to Java based libraries.
</p>
</li>
<li>
<p>
Workspace service dependencies have nothing to do with what
container it is running in; they are normal Java application
dependencies like
<a href="http://www.springframework.org/">Spring</a>,
<a href="http://ehcache.sourceforge.net/">ehcache</a>,
<a href="http://backport-jsr166.sourceforge.net/">backport-util-concurrent</a>,
and JDBC (currently using the embedded
<a href="http://db.apache.org/derby/">Derby</a> database).
</p>
</li>
</ul>
</div>
</li>
<li>
<p>
<a name="wsrf-frontend"> </a>
<b>What is the WSRF frontend? _NAMELINK(wsrf-frontend)</b>
</p>
<p>
This is the protocol implementation in longstanding use by previous
workspace services and clients including the popular cloud-client.
</p>
</li>
<li>
<p>
<a name="ec2-frontend"> </a>
<b>What is the EC2 frontend? _NAMELINK(ec2-frontend)</b>
</p>
<p>
This is an implementation of two of the Amazon
<a href="http://aws.amazon.com/ec2">Elastic Compute Cloud</a> (EC2)
interfaces that allow you to use clients
developed for the real EC2 system against Nimbus based clouds.
</p>
<p>
There is support for both EC2 interfaces: SOAP and Query.
</p>
<p>
See <a href="#ec2-messaging">What EC2 operations are supported</a>?
</p>
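<p>
As a concrete sketch of what a Query API call involves on the wire, the
following builds an AWS Signature Version 2 request using only the Python
standard library. The endpoint hostname and credentials are hypothetical;
in practice you simply point an existing EC2 client at the cloud's Query URL.
</p>

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_query_v2(method, host, path, params, secret_key):
    """Add an AWS Signature Version 2 'Signature' entry to the params."""
    params = dict(params)
    params.setdefault("SignatureVersion", "2")
    params.setdefault("SignatureMethod", "HmacSHA256")
    # Canonical query string: keys sorted, values percent-encoded.
    canonical = "&".join(
        "%s=%s" % (urllib.parse.quote(k, safe="-_.~"),
                   urllib.parse.quote(str(v), safe="-_.~"))
        for k, v in sorted(params.items()))
    string_to_sign = "\n".join([method, host.lower(), path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    params["Signature"] = base64.b64encode(digest).decode()
    return params

# Hypothetical Nimbus cloud endpoint and credentials; the API version
# matches the 2009-08-15 WSDL the service implements.
signed = sign_query_v2(
    "GET", "nimbus.example.org", "/",
    {"Action": "DescribeInstances", "Version": "2009-08-15",
     "AWSAccessKeyId": "EXAMPLEKEY"},
    "EXAMPLESECRET")
```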
</li>
<li>
<p>
<a name="ec2-messaging"> </a>
<b>What EC2 operations are supported? _NAMELINK(ec2-messaging)</b>
</p>
<p>
(See <a href="#ec2-frontend">What is the EC2 frontend</a>?)
</p>
<p>
Nimbus provides a partial protocol implementation of EC2's
WSDL (namespace <i>http://ec2.amazonaws.com/doc/2009-08-15/</i>,
a previous version supported <i>2008-05-05</i>) and the Query API
complement to that WSDL.
The operations behind these EC2 commandline clients are currently
provided:
</p>
<div class="uldonotmoveleft">
<ul>
<li>
<p>
ec2-describe-images - See what images in your personal cloud
directory you can run.
</p>
</li>
<li>
<p>
ec2-run-instances - Run images that are in your personal cloud
directory.
</p>
</li>
<li>
<p>
ec2-describe-instances - Report on currently running instances.
</p>
</li>
<li>
<p>
ec2-terminate-instances - Destroy currently running instances.
</p>
</li>
<li>
<p>
ec2-reboot-instances - Reboot currently running instances.
</p>
</li>
<li>
<p>
ec2-add-keypair [*] - Add a personal SSH public key that can be
installed for root SSH logins.
</p>
</li>
<li>
<p>
ec2-delete-keypair - Delete keypair mapping.
</p>
</li>
</ul>
<p>
[*] - There are two options for add-keypair implementations that
can be chosen by the administrator in the conf file:
</p>
<ul>
<li>
<p>
One is the normal implementation where the
server-side generates a private and public key (using
<a href="http://www.jcraft.com/jsch/">jsch</a>) and delivers
the private key to you.
</p>
</li>
<li>
<p>
The other (configured by default) is a break from the
regular semantics. It allows the keypair "name" you
send in the request to be the name AND the public key value.
This means there is never a private key server-side and
also that you can use keys you already have created
on your system. (In a sense, this is
<b>add</b>-keypair as opposed to the normal behavior
which should perhaps be named <b>create</b>-keypair).
</p>
</li>
</ul>
</div>
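<p>
The "name carries the public key" semantics can be sketched as below. The
<code>"|||"</code> delimiter and the helper are purely illustrative -- how
Nimbus actually packs the public key into the name field is a service
convention not reproduced here:
</p>

```python
def build_add_keypair_name(label, pubkey_text):
    """Pack SSH public key material into the keypair 'name' field.

    Illustrative only: the real delimiter/encoding Nimbus expects is a
    deployment detail, but the idea is that the server never needs to
    hold a private key.
    """
    return label + "|||" + pubkey_text.strip()

# The key material would normally be read from ~/.ssh/id_rsa.pub.
demo_name = build_add_keypair_name(
    "my-key", "ssh-rsa AAAAB3NzaC1yc2E... user@laptop\n")
```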
</li>
<li>
<p>
<a name="metadata-server"> </a>
<b>What is the metadata server? _NAMELINK(metadata-server)</b>
</p>
<p>
The metadata server responds to HTTP queries from VMs, using the same path names
as the <a href="http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?AESDG-chapter-instancedata.html">EC2 metadata server</a>.
</p>
<p>
The URL for this is obtained by looking at '<i>/var/nimbus-metadata-server-url</i>'
on the VM, which is an optional customization task injected by the Nimbus service
on your behalf (we are considering trying to simulate Amazon's hardcoded IP address
"169.254.169.254" on any subnet, feedback on this idea is appreciated).
</p>
<p>
Like on EC2, its responses are based on the source IP address from the TCP packet,
giving the information specific to each VM instance. This also means there is an
assumption that the immediately local network is non-spoofable. Administrators,
you should also put in place a firewall rule that restricts this port to the VMs
only, just in case.
</p>
<p>
The metadata server is disabled by default; consult your administrator (or try
a query from inside your VM).
</p>
<p>
Administrators, see "services/etc/nimbus/workspace-service/metadata.conf" for
the details.
</p>
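<p>
For example, a script inside the VM could discover the server and query a
field as below. This is a minimal sketch: the <code>latest/</code> version
prefix follows the EC2 convention, and the prefixes a given deployment
answers to may differ.
</p>

```python
import urllib.request

METADATA_URL_FILE = "/var/nimbus-metadata-server-url"

def metadata_field_url(base_url, field):
    """Join the advertised base URL with an EC2-style field path."""
    return base_url.rstrip("/") + "/" + field.lstrip("/")

def fetch_field(field, url_file=METADATA_URL_FILE):
    """Query one metadata field; only works inside a Nimbus-launched VM."""
    with open(url_file) as f:
        base = f.read().strip()
    url = metadata_field_url(base, "latest/" + field)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

# e.g. fetch_field("meta-data/local-ipv4") from inside the instance
```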
</li>
<li>
<p>
<a name="metadata-fields"> </a>
<b>What metadata server fields are supported? _NAMELINK(metadata-fields)</b>
</p>
<p>
(See <a href="#metadata-server">What is the metadata server</a>?)
</p>
<p>
Nimbus provides a partial implementation of EC2's version of the metadata server
(<a href="http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?instancedata-data-categories.html">their full field listing</a>).
These fields are currently supported:
</p>
<div class="uldonotmoveleft">
<ul>
<li>
<p>
<i>user-data</i> - "opaque" information injected by the client at
launch time
</p>
</li>
<li>
<p>
<i>meta-data/ami-id</i> - the ami-id assigned to this image. This
is simulated by the EC2 protocols in Nimbus; the "definitive" piece
of information for a launch is really the filename in the repository,
as there is no AMI registry like on EC2.
</p>
</li>
<li>
<p>
<i>meta-data/ami-launch-index</i> - if this VM instance was launched
as part of a group (cluster), it might have a launch index other than
zero. This differentiates it from other homogeneous nodes in the launch.
</p>
</li>
<li>
<p>
<i>meta-data/local-hostname</i> - the 'private' hostname of this VM [1]
</p>
</li>
<li>
<p>
<i>meta-data/local-ipv4</i> - the 'private' IP of this VM [1]
</p>
</li>
<li>
<p>
<i>meta-data/public-hostname</i> - the 'public' hostname of this VM [1]
</p>
</li>
<li>
<p>
<i>meta-data/public-ipv4</i> - the 'public' IP of this VM [1]
</p>
</li>
</ul>
<p>
[1] - What 'public' and 'private' mean in this context is up to an
administrator configuration. The VM also may or may not have two NICs
on it; the values of these fields might be equal or not.
</p>
</div>
</li>
<li>
<p>
<a name="cloud-client"> </a>
<b>What is the cloud client? _NAMELINK(cloud-client)</b>
</p>
<p>
The cloud client aims to get users up and running in minutes with
instance launches and one-click clusters, even from laptops, NATs,
etc. See the cloud client
<a href="clouds/cloudquickstart.html">quickstart</a> and
<a href="clouds/clusters.html">cluster quickstart</a> to see what
it can do.
</p>
</li>
<li>
<p>
<a name="reference-client"> </a>
<b>What is the reference client? _NAMELINK(reference-client)</b>
</p>
<p>
The reference client exposes all features of the <a href="#wsrf-frontend">WSRF
frontend</a> as a commandline client. It is relatively complex
to use and thus typically wrapped by task-specific scripts.
</p>
<p>
Internally, it's implemented around a base Java client API suitable
for portal integration or any programmatic usage. Docs
on this API are forthcoming, but if you are interested, check out
<i>org.globus.workspace.client_core</i> in the client source tree
(it contains Javadoc comments; also consult example usages in the
<i>org.globus.workspace.client.modes</i> package).
</p>
</li>
<li>
<p>
<a name="wpilot"> </a>
<b>What is the Workspace Pilot? _NAMELINK(wpilot)</b>
</p>
<p>
The pilot is a program the service will submit to a local site
resource manager (LRM) in order to obtain time on the VMM nodes. When
not allocated to the workspace service, these nodes will be used
for jobs as normal (the jobs run in normal system accounts in Xen
domain 0 with no guest VMs running).
</p>
<p>
Several extra safeguards have been added to make sure the node is
returned from VM hosting mode at the proper time, including support
for handling:
</p>
<div class="uldonotmoveleft">
<ul>
<li>
the workspace service being down or malfunctioning
</li>
<li>
LRM preemption (including deliberate LRM job cancellation)
</li>
<li>
node reboot/shutdown
</li>
</ul>
</div>
<p>
Also included is a one-command "kill 9" facility for administrators
as a "worst case scenario" contingency.
</p>
<p>
Using the pilot is optional. By default the service does not
operate with it; instead, the service directly manages the nodes
it is configured to manage.
</p>
</li>
<li>
<p>
<a name="rm-api"> </a>
<b>What is the RM API? _NAMELINK(rm-api)</b>
</p>
<p>
Most things having to do with the Java server side components are
very flexible, featuring an extensibility system that allows for
customization and replacement at runtime of various behaviors.
By employing the
<a href="http://www.springframework.org/">Spring</a>
framework's "Dependency Injection" system, the Java components are
virtually like LEGO&#0174; blocks.
</p>
<p>
One of the very strong internal interfaces here is the site resource
management module which allows the remote security and protocol
implementations and semantics to be separate from one consistent
set of management operations. The implementing module governs how
and when callers get VMs, assigns the resources to use, and takes
them away at the appropriate times.
</p>
</li>
<li>
<p>
<a name="wcontrol"> </a>
<b>What is workspace-control? _NAMELINK(wcontrol)</b>
</p>
<p>
Program installed on each VMM node used to (1) start, stop, and
pause VMs, (2) implement VM image reconstruction and management,
(3) securely connect the VMs to the network, and (4) deliver
contextualization information (see Context Broker).
</p>
<p>
Currently, workspace-control works with Xen and KVM.
</p>
<p>
Implemented in Python in order to be portable and easy to install.
Requires libvirt, sudo, ebtables, and a DHCP server library.
</p>
</li>
<li>
<p>
<a name="ctxbroker"> </a>
<b>What is the Context Broker? _NAMELINK(ctxbroker)</b>
</p>
<p>
This is a service that allows clients to coordinate large virtual
cluster launches automatically and repeatably.
</p>
<p>
Used to deploy "one-click" virtual clusters that function right
after launch, as opposed to the set of "unconnected" virtual
machines that most VM-on-demand services give you.
It also provides a facility to "personalize" VMs (seed them with
secrets, access policies, and just-in-time configurations).
This requires that the VMs run a lightweight script at boot time
called the <a href="#ctxagent">Context Agent</a>.
</p>
<p>
This is a user-oriented system that runs as an "overlay" on top of
the normal VM-on-demand mechanics. It's been used on top of Nimbus
clouds as well as with EC2 resources.
</p>
<p>
See the <a href="clouds/clusters2.html">one-click clusters
guide</a> for more detail and the
<a href="clouds/clusters.html">one-click cluster example</a> to
show just one of the many things this can be used to accomplish.
</p>
</li>
<li>
<p>
<a name="ctxagent"> </a>
<b>What is the Context Agent? _NAMELINK(ctxagent)</b>
</p>
<p>
A lightweight agent on each VM -- its only dependencies are
Python and the ubiquitous curl program -- securely contacts the
context broker using a secret key. This key was created on the fly
and seeded inside the instance. This agent gets information
concerning the cluster from the context broker and then causes
last minute changes inside the image to adapt to the environment.
</p>
<p>
See <a href="#ctxbroker">What is the Context Broker</a>?
You can download it from <a href="clouds/clusters2.html#custom">this
section</a> of the one-click clusters guide.
</p>
</li>
<li>
<p>
<a name="nimbusweb"> </a>
<b>What is Nimbus Web? <span class="namelink"><a href="#nimbusweb">(#)</a></span></b>
</p>
<p>
Nimbus Web is the evolving web interface for Nimbus. Its aim is
to provide administrative and user functions in a friendly
interface.
</p>
<p>
Nimbus Web is centered around a Python Django web application that is intended to be
deployable completely separate from the Nimbus service. Instructions for configuring
and starting the application are in <a href="admin/reference.html#nimbusweb-config">this
section</a> of the <a href="admin/index.html">administrator guide</a>.
</p>
<p>
Existing features:
</p>
<ul>
<li>User X509 certificate management and distribution</li>
<li>Query interface authentication token management</li>
<li>Cloud configuration functionality</li>
</ul>
<p>
Forthcoming features:
</p>
<ul>
<li>Visualization of cloud usage data</li>
</ul>
</li>
<li>
<p>
<a name="cumulus"> </a>
<b>What is Cumulus? _NAMELINK(cumulus)</b>
</p>
<p>
Cumulus is an open source implementation of the S3 REST API. Some features,
such as versioning and COPY, are not yet implemented, but additional
features have been added, such as file system usage quotas.
</p>
</li>
<li>
<p>
<a name="cumulusclients"> </a>
<b>What clients can I use with Cumulus? _NAMELINK(cumulusclients)</b>
</p>
<p>
Cumulus is compliant with the S3 REST network API; therefore, clients
that work against the S3 REST API should work with Cumulus. Some
of the more popular ones are boto and s3cmd. The Nimbus cloud client
uses the Jets3t library to interact with Cumulus.
</p>
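<p>
Since the S3 REST protocol ultimately boils down to signed HTTP requests,
the sketch below shows how the S3 <code>Authorization</code> header is
computed using only the Python standard library. The bucket, key, and
credentials are hypothetical; real clients such as boto or s3cmd handle
this for you.
</p>

```python
import base64
import hashlib
import hmac

def s3_auth_header(method, bucket, key, date, access_key, secret_key,
                   content_md5="", content_type=""):
    """Build an S3 REST 'Authorization' header value.

    Covers only the plain bucket/key case; sub-resources (?acl, ...)
    and x-amz- headers are omitted for brevity.
    """
    resource = "/%s/%s" % (bucket, key)
    string_to_sign = "\n".join(
        [method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return "AWS %s:%s" % (access_key, base64.b64encode(digest).decode())

header = s3_auth_header("GET", "mybucket", "worker-image.gz",
                        "Thu, 17 Nov 2005 18:49:58 GMT",
                        "EXAMPLEKEY", "EXAMPLESECRET")
```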
</li>
<li>
<p>
<a name="cumulusnimbus"> </a>
<b>How are Nimbus and Cumulus related? _NAMELINK(cumulusnimbus)</b>
</p>
<p>
Cumulus is the front end to the Nimbus VM image repository. In order to
boot an image on a given Nimbus cloud, that image must first be put into
that same cloud's Cumulus repository (advanced use cases can bypass this).
</p>
</li>
<li>
<p>
<a name="cumulusnonimbus"> </a>
<b>Can Cumulus be installed without Nimbus? _NAMELINK(cumulusnonimbus)</b>
</p>
<p>
Yes. Cumulus does not rely on any higher level Nimbus libraries, and thus
users who wish to install it as a standalone front end to their storage
system may do so.
</p>
</li>
<li>
<p>
<a name="nimbusnocumulus"> </a>
<b>Can Nimbus be installed without Cumulus? _NAMELINK(nimbusnocumulus)</b>
</p>
<p>
No. Nimbus version 2.5 and higher is packaged with Cumulus, and so Nimbus
is intimately aware of Cumulus. Nimbus must be installed with the version
of Cumulus with which it is packaged.
</p>
</li>
<li>
<p>
<a name="cumulusprop"> </a>
<b>Does the Nimbus IaaS system directly use Cumulus for image propagation? _NAMELINK(cumulusprop)</b>
</p>
<p>
No. While Cumulus is the primary interface for transferring
images in and out of the cloud, it is not the mechanism by which images
are propagated from the repository to the virtual machine monitors.
Propagation is done in a variety of different ways, many of which we
are still developing and researching in order to find the best solution
for scientific users.
</p>
</li>
<li>
<p>
<a name="cumulusreliable"> </a>
<b>How reliable is Cumulus? _NAMELINK(cumulusreliable)</b>
</p>
<p>
The reliability of Cumulus depends entirely on the storage system that is backing it. Achieving S3 levels of reliability requires S3 levels of hardware investment, but with our system even small providers can be S3 protocol compliant while making an independent choice on cost/reliability.
</p>
</li>
<li>
<p>
<a name="cumulusbackend"> </a>
<b>What type of storage system backs the Cumulus repository? _NAMELINK(cumulusbackend)</b>
</p>
<p>
In the first release of Cumulus we are only providing a POSIX filesystem
backend storage system. However, this is a very powerful plugin. It
can be used against a variety of storage systems including PVFS, GFS, and
HDFS (under a FUSE module). We have prototyped HDFS and BlobSeer plugins
and we will be releasing them soon.
</p>
</li>
<li>
<p>
<a name="lantorrent"> </a>
<b>What is LANTorrent? _NAMELINK(lantorrent)</b>
</p>
<p>
LANTorrent is a file distribution protocol integrated into the Nimbus
IaaS toolkit. It works as a means to multicast virtual machine images
to many backend nodes. The protocol is optimized for propagating
virtual machine images (typically large files) from a central repository
across a LAN to many virtual machine monitor nodes.
</p>
</li>
<li>
<p>
<a name="lantorrentenable"> </a>
<b>How do I enable LANTorrent? _NAMELINK(lantorrentenable)</b>
</p>
<p>
See <a href="admin/reference.html#lantorrent"><b>this section</b></a>
of the administrator guide.
</p>
</li>
<li>
<p>
<a name="license"> </a>
<b>How is the software licensed? _NAMELINK(license)</b>
</p>
<p>
Nimbus is licensed under the terms of the
<a href="http://www.apache.org/licenses/LICENSE-2.0"><b>Apache
License version 2</b></a>.
</p>
</li>
</ul>
</div>
<!-- force blankspace at the bottom such that questions near the end of the list
appear towards the top of browser window -->
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
_NIMBUS_CENTER2_COLUMN_END
_NIMBUS_FOOTER1
_NIMBUS_FOOTER2
_NIMBUS_FOOTER3