m4_include(/mcs/m4/worksp.lib.m4)
_NIMBUS_HEADER(Features)
_NIMBUS_HEADER2(n,n,y,n,n,n,n)
_NIMBUS_LEFT2_COLUMN
_NIMBUS_LEFT2_ABOUT_SIDEBAR(n,n,y,n)
_NIMBUS_LEFT2_COLUMN_END
_NIMBUS_CENTER2_COLUMN
_NIMBUS_2_5_DEPRECATED
<h2>Major Features</h2>
<div class="ulmoveleft">
<ul>
<a name="opensource"></a>
<li>
<h4>Open Source IaaS _NAMELINK(opensource)</h4></li>
<p>
Nimbus provides a 100% freely available and open source Infrastructure-as-a-Service
(IaaS) system. Every feature our community develops is freely
available, with no upgrade costs.
</p>
<a name="cumulus"></a>
<li><h4>Storage Cloud Service _NAMELINK(cumulus)</h4></li>
<p>
Cumulus is a storage cloud service that is compatible with the S3 REST
API. It can be used with many existing clients (boto, s3cmd, jets3t,
etc.) to provide data storage and transfer services.
</p>
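<p>
Because Cumulus speaks the S3 REST protocol, existing S3 clients authenticate
to it exactly as they do to S3. As an illustration, here is a minimal sketch
(Python standard library only; the access key, secret, and object path are
made-up values, not Nimbus defaults) of the AWS Signature Version 2
computation behind the <code>Authorization</code> header that clients such as
boto and s3cmd produce for every request:
</p>

```python
# Sketch of the S3 REST Authorization header computation (AWS Signature
# Version 2) that S3-style clients perform under the hood. Credentials
# and paths below are illustrative values only.
import base64
import hashlib
import hmac

def s3_authorization(access_key, secret_key, verb, resource, date,
                     content_md5="", content_type=""):
    """Build an AWS Signature Version 2 Authorization header value."""
    # The string-to-sign concatenates the request elements, one per line.
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, date, resource])
    # HMAC-SHA1 over the string-to-sign, keyed by the secret key.
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return "AWS %s:%s" % (access_key, signature)

header = s3_authorization("EXAMPLEKEY", "examplesecret", "GET",
                          "/mybucket/myimage.img",
                          "Tue, 27 Mar 2007 19:36:42 +0000")
```

<p>
A service implementing the S3 REST API recomputes the same HMAC on arrival
and compares; this is what lets unmodified S3 clients work against Cumulus.
</p>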
<a name="remotedep"></a>
<li>
<h4>Remote deployment and lifecycle management of VMs _NAMELINK(remotedep)</h4>
</li>
<p>
Nimbus clients can deploy, pause, restart, and shut down VMs.
</p>
<p>
On deployment, the client presents the workspace service with:
</p>
<ol>
<li>
<i>meta-data</i> (containing a pointer to the VM
image to use as well as configuration information
such as networking)
</li>
<li>
<i>resource allocation</i> (specifying the resources, such as
deployment time, CPUs, and memory, that should be assigned
to the VM)
</li>
</ol>
<p>
Once a request for VM deployment is accepted by the workspace
service, a client can inspect various VM properties (e.g., its
lifecycle state, time-to-live, the IP address assigned to a VM
on deployment, or the resources assigned to the VM) via WSRF
resource properties/notifications or polling (such as EC2
describe-instances).
</p>
<p>
Before deployment, clients can discover the properties of site
configurations (e.g., which VMM the site supports)
and match them against the meta-data of the workspaces they want to
deploy (which describes, for example, which VMM is required for the
workspace).
</p>
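<p>
The split between the two parts of a deployment request can be sketched as
follows. This is purely illustrative: the field names are invented for
readability and are not the actual Nimbus wire schema.
</p>

```python
# Illustrative sketch of a workspace deployment request. Field names
# below are invented for readability and are NOT the actual Nimbus
# schema.
metadata = {
    # pointer to the VM image to use
    "image": "http://repository.example.org/images/debian.img",
    # configuration information such as networking
    "network": "public",
}
resource_allocation = {
    "duration_minutes": 120,   # requested deployment time
    "cpus": 2,
    "memory_mb": 1024,
}
deployment_request = {
    "meta-data": metadata,
    "resource-allocation": resource_allocation,
}
```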
<a name="awscompat"></a>
<li>
<h4>Compatibility with Amazon's Network Protocols _NAMELINK(awscompat)</h4>
</li>
<p>
Clients written for <a href="http://aws.amazon.com/ec2">EC2</a>
can be used with Nimbus installations.
Both the SOAP API and the REST API have been implemented in Nimbus.
For more information,
see <a href="#ec2-frontend">What is the EC2 frontend</a>?
</p>
<p> <a href="http://aws.amazon.com/s3">S3</a> REST API clients
can also be used for managing VM storage with the Nimbus system.
</p>
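<p>
EC2 Query API clients sign every request before sending it, and an
EC2-compatible frontend must verify the same signature on arrival. The
following sketch (Python standard library only; the endpoint host and
credentials are made-up values) shows the AWS Signature Version 2
computation such clients perform:
</p>

```python
# Sketch of AWS Signature Version 2 request signing for the EC2 Query
# API (HTTP GET). The host and credentials are illustrative values.
import base64
import hashlib
import hmac
import urllib.parse

def sign_query(secret_key, host, path, params):
    """Compute the Signature parameter for an EC2 Query API request."""
    # Parameters are sorted by byte order and percent-encoded.
    query = "&".join(
        "%s=%s" % (urllib.parse.quote(k, safe="-_.~"),
                   urllib.parse.quote(str(v), safe="-_.~"))
        for k, v in sorted(params.items()))
    # The canonical string covers the verb, host, path, and query.
    string_to_sign = "\n".join(["GET", host, path, query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {
    "Action": "DescribeInstances",
    "AWSAccessKeyId": "EXAMPLEKEY",
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
    "Timestamp": "2010-09-01T12:00:00Z",
    "Version": "2009-08-15",
}
signature = sign_query("examplesecret", "nimbus.example.org", "/", params)
```

<p>
Any client that computes this correctly (boto's EC2 support, for instance)
can talk to an EC2-compatible frontend simply by pointing it at a different
endpoint host.
</p>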
<a name="x509"></a>
<li>
<h4>Support for X509 Credentials _NAMELINK(x509)</h4>
</li>
<p>
Users interested in a strong PKI security model can make use of
our WSRF interface which uses X509 certificates. While the main
feature here is strong security, it can also be a great convenience
for institutions that are already using DOE certificates or any other
certificate authority.
</p>
<a name="cloudclient"></a>
<li>
<h4>Easy to Use Cloud Client _NAMELINK(cloudclient)</h4>
</li>
<p>
The workspace cloud client allows authorized clients to access
many Workspace Service features in a user-friendly way.
It is designed to get users up and running in a matter of
minutes, even from laptops and behind NATs.
</p>
<p>
The cloud-client is the easiest way to use both a storage cloud
and IaaS together. Even the uninitiated find this fully integrated
tool easy to use.
</p>
<p>
See the <a href="/clouds/">clouds page</a>, as well as a
behind-the-scenes overview of the service in the
<a href="doc/cloud.html">cloud configuration</a> documentation.
</p>
<a name="protocols"></a>
<li>
<h4>Multiple protocol support / Compartmentalized dependencies _NAMELINK(protocols)</h4>
<p>
The workspace service is an implementation of a strong "pure Java"
internal interface (see <a href="faq.html#rm-api">What is the RM
API</a>?) which allows multiple remote protocols to be supported as
well as differing underlying manager implementations.
</p>
<p>
There is currently one known manager implementation (the workspace
service) and two supported remote protocol sets:
</p>
<div class="uldonotmoveleft">
<ul>
<li>
<p>
<a href="http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsrf">WSRF</a>
based: protocol implementation in longstanding use by previous
workspace services and clients including the cloud-client.
</p>
</li>
<li>
<p>
<a href="http://aws.amazon.com/ec2">EC2</a> based: clients
written for EC2 can be used with Nimbus installations. For
more information, see <a href="#ec2-frontend">What is the EC2 frontend</a>?
</p>
</li>
</ul>
<p>
Both of these protocols happen to be Web Services based, and both
run in the <a href="http://ws.apache.org/axis/">Apache Axis</a>
based GT Java container. Neither, however, is a necessity:
</p>
<ul>
<li>
<p>
There is nothing specific to Web Services based remote protocols
in the workspace service implementation; the messaging system
just needs to be able to speak to Java based libraries.
</p>
</li>
<li>
<p>
Workspace service dependencies have nothing to do with what
container it is running in; they are normal Java application
dependencies like
<a href="http://www.springframework.org/">Spring</a>,
<a href="http://ehcache.sourceforge.net/">ehcache</a>,
<a href="http://backport-jsr166.sourceforge.net/">backport-util-concurrent</a>,
and JDBC (currently using the embedded
<a href="http://db.apache.org/derby/">Derby</a> database).
</p>
</li>
</ul>
</div>
</li>
<a name="group"></a>
<li>
<h4>Flexible group management _NAMELINK(group)</h4>
<p>
The workspace service can start and manage groups of workspaces at
a time, as well as groups of groups ("ensembles") where each
group's VM images, resource allocation, duration, and node number
can be different. Groups and ensembles will be run in a
co-scheduled manner. That is, all group/cluster members will be
scheduled to run at the same time, or none will run, even when using
best-effort schedulers (see the <a href="#pilot">pilot section</a>
below).
</p>
<p>
Auto-configuration of these clusters is also supported (see the
cloud <a href="clouds/clusters.html">clusters</a> page).
</p>
</li>
<a name="accounting"></a>
<li>
<h4>Per-client usage tracking _NAMELINK(accounting)</h4>
<p>
The service can track deployment time (both used and currently
reserved) on a per-client basis which can be used in
authorization decisions about subsequent deployments. Clients
may query the service about their own usage history.
</p>
</li>
<a name="quotas"></a>
<li>
<h4>Per-user Storage Quota _NAMELINK(quotas)</h4>
<p>
Cumulus (the VM image repository manager for Nimbus) can be configured
to enforce per-user storage usage limits. This is an especially
important feature for the scientific community where it is
not convenient to directly charge dollars and cents for storage
but where resources still need to be protected and rationed.
</p>
</li>
<a name="ana"></a>
<li>
<h4>Flexible request authentication and authorization _NAMELINK(ana)</h4>
<p>
The workspace service uses GSI to authenticate and authorize
creation requests. Among others, it allows a client to be authorized
based on VO/role information contained in the VOMS credentials
and attributes obtained via GridShib. Authorization policies
can also be applied to networking requests, VM image files,
resource requests, and the time used/reserved by the client.
</p>
<p>
An included authorization setup (not enabled by default) allows
for straightforward group management. You can assign identities
to logical groups and then write policies about those groups.
You can set simultaneous reservation limits, reservation limits
that take past workspace usage into account, and detailed repository
node and path checks.
</p>
</li>
<a name="usermanage"></a>
<li>
<h4>Easy user management _NAMELINK(usermanage)</h4>
<p>
New in Nimbus 2.5 is a set of user management tools that
make administering a Nimbus cloud significantly easier.
The tools are both easy to use and scriptable.
</p>
</li>
<a name="config"></a>
<li>
<h4>Configuration management (deployment request) _NAMELINK(config)</h4>
<p>
Some configuration operations need to be finished at
deployment-time because they require information that becomes
available only late in the deployment process (such as network
address assignments, physical host assignments, etc.).
</p>
<p>
The workspace service provides optional mechanisms to carry out
such configuration management actions. Configuration actions
available are DHCP delivery of network assignments and arbitrary
file based customizations (mount + alter image).
</p>
<p>
Also see <a href="#ctx">one-click clusters</a>.
</p>
</li>
<a name="ctx"></a>
<li>
<h4>One-click clusters (contextualization) _NAMELINK(ctx)</h4>
<p>
See the cloud <a href="clouds/clusters.html">clusters</a> page for
how auto-configuration of entire clusters (contextualization)
is supported by the science clouds. This allows the cloud client
to launch "one-click" clusters whose nodes securely configure
themselves to operate in new network and security environments.
</p>
</li>
<a name="client"></a>
<li>
<h4>Workspace client _NAMELINK(client)</h4>
<p>
The workspace client allows authorized clients to access
all Workspace Service features. The current release contains
a Java reference implementation.
</p>
</li>
<a name="net"></a>
<li>
<h4>VM network configuration (deployment request) _NAMELINK(net)</h4>
<p>
The workspace service allows a client to configure networking
for the VM, accommodating several flexible options (allocating a
new network address from a site pool, bridging an existing
address, etc.).
</p>
<p>
In particular, a client can request that a VM be configured on
startup with several different NICs allocating addresses from
different pools (e.g., public and private, thus implementing
the Edge Service requirement).
</p>
<p>
There are mechanisms for a site to set aside such address pools
for the VMs as well as tools intercepting the VM's DHCP requests
to deliver the right addresses.
</p>
</li>
<a name="backend"></a>
<li>
<h4>Xen backend plugin _NAMELINK(backend)</h4>
<p>
The current workspace backend plugin is for the Xen
hypervisor, an efficient open source implementation.
</p>
</li>
<a name="lrm"></a>
<li>
<h4>Local resource management plugin _NAMELINK(lrm)</h4>
<p>
The workspace service provides a local resource manager with
the capability to manage a pool of nodes on which VMs are
deployed to accommodate the service deployment model
(as opposed to a batch deployment model).
</p>
<p>
To use it, the pool nodes are configured with a lightweight
Python management script called workspace-control.
</p>
<p>
Besides interfacing with Xen, workspace-control maps networking
requests to the proper bridge interfaces, controls file isolation
between different workspace instances, interfaces with ebtables
and DHCP for IP address delivery, and can accomplish local
transfers (file propagation from the WAN accessible image node)
in daemonized mode.
</p>
</li>
<a name="pilot"></a>
<li>
<h4>Non-invasive site scheduler integration _NAMELINK(pilot)</h4>
<p>
When using the local resource management <a href="#lrm">plugin</a>
(the default), a set of VMM resources is managed entirely by
the workspace service.
But it can alternatively be integrated with
a site's scheduler/resource manager (such as PBS) using the
<b>workspace pilot</b> program.
</p>
<p>
This allows a dual-use grid cluster to be achieved: regular jobs
can run on a VMM node that is hosting no guest VMs, but if the node
is allocated to the workspace service (at the service's request),
VMs can be used. The site resource manager maintains full control
over the cluster and <i>does not need to be modified</i>.
</p>
<p>
Many safeguards are included to ensure nodes are cleanly
returned to their normal non-VM-hosting state, including protection
against workspace service unavailability, early cancellation by the
site resource manager, and node reboots. As a "worst case scenario"
contingency, a one-command "kill 9" facility is provided for
administrators.
</p>
</li>
<a name="fine"></a>
<li>
<h4>VM fine-grain resource usage enforcement (resource allocation) _NAMELINK(fine)</h4>
<p>
The workspace service allows the client to specify the
resource allocation to be assigned to a VM and to manage that
resource allocation during deployment. In the current release, only
memory and deployment time are managed.
</p>
</li>
<!--
<li>
<a name="staging"></a>
<h4>Image staging adapters</h4>
<p>
Based on request from users we added adapter-based staging to
the workspace service; the adapters include support for HTTP GET
and the Globus Reliable File Transfer Service (RFT). The staging
mechanism can seamlessly handle delegation.
</p>
</li>
-->
</ul>
</div>
<br />
<br />
<p>
For more details, see the current release's
<a href="index.html">documentation</a> and
the Nimbus <a href="faq.html">FAQ</a>.
</p>
<!-- This page intentionally left blank -->
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
_NIMBUS_CENTER2_COLUMN_END
_NIMBUS_FOOTER1
_NIMBUS_FOOTER2
_NIMBUS_FOOTER3