m4_include(/mcs/m4/worksp.lib.m4)
_NIMBUS_HEADER(2.7 Admin Reference)
_NIMBUS_HEADER2(n,n,y,n,n,n,n)
_NIMBUS_LEFT2_COLUMN
_NIMBUS_LEFT2_ADMIN_SIDEBAR(n,n,y,n,n)
_NIMBUS_LEFT2_COLUMN_END
_NIMBUS_CENTER2_COLUMN
_NIMBUS_IS_DEPRECATED


<h2>Nimbus 2.7 Admin Reference</h2>

<p>
    This section explains some side tasks as well as some
    non-default configurations.
</p>

<ul>
    <li>
        <p>
            <a href="#confs">Notes on conf files</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cloud-overview">Cloud configuration overview</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cloud-userexp">Cloud configuration user experience</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cloud-assumptions">Cloud configuration assumptions and defaults</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cloud-necessary">Cloud configuration, necessary configurations</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cloud-properties">Cloud configuration property reference</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#user-management">User management</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#group-authz">Per-user rights and allocations</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#elastic">Enabling the EC2 SOAP frontend</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#query">Configuring the EC2 Query frontend</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#service-ports">Changing the network ports of Nimbus and Cumulus</a>
        </p>
    </li>
<li>
        <p>
            <a href="#nimbusweb-config">Configuring the Nimbus Web interface</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#nimbusweb-usage">Using the Nimbus Web interface</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#cert-pointing">Configuring a different host certificate</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#manual-nimbus-basics">Configuring Nimbus basics manually without the auto-configuration program</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#resource-pool-and-pilot">Resource pool and pilot configurations</a>
        </p>
        <ul>
            <li>
                <a href="#resource-pool">Resource pool</a>
            </li>
            <li>
                <a href="#pilot">Pilot</a>
            </li>
        </ul>
    </li>
    <li>
        <p>
            <a href="#backend-config-invm-networking">Network configuration
                details</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#context-broker-standalone">Configuring a standalone context broker</a>
        </p>
    </li>

    <li>
        <p>
            <a href="#cumulus">Cumulus</a>
        </p>
        <ul>
            <li>
                <a href="#cumulus-config">Configuration</a>
            </li>
            <li>
                <a href="#repository-location">Repository Location</a>
            </li>
            <li>
                <a href="#boto">Using the boto client</a>
            </li>
            <li>
                <a href="#s3cmd">Using the s3cmd client</a>
            </li>
            <li>
                <a href="#cumulus-https">Using HTTPS</a>
            </li>
            <li>
                <a href="#cumulus-quotas">Disk Usage Quota</a>
            </li>
            <li>
                <a href="#cumulusnimbusconfig">Configuring Cumulus Options in Nimbus</a>
            </li>
        </ul>
    </li>

    <li>
        <p>
            <a href="#lantorrent">LANTorrent</a>
        </p>
        <ul>
            <li>
                <a href="#lantorrent-protocol">Protocol</a>
            </li>
            <li>
                <a href="#lantorrent-config">Configuration</a>
            </li>
        </ul>
    </li>
    
    <li>
        <p>
            <a href="#backfill-and-spot-instances">Backfill and spot instances</a>
        </p>
    </li>
    <li>
        <p>
            <a href="#libvirttemplate">libvirt template and virtio</a>
        </p>
    </li>



</ul>

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<br />

<a name="confs"> </a>
<h2>Notes on conf files _NAMELINK(confs)</h2>

<p>
    The Nimbus conf files have many comments around each configuration. Check
    those out. Their content will be inlined here in the future.
</p>

<p>
    See the <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/workspace-service</tt> directory.
</p>
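<p>
    For example, this is where the files referenced later in this document live
    (a partial listing; your installation contains more):
</p>

<div class="screen"><pre>
$ ls $NIMBUS_HOME/services/etc/nimbus/workspace-service/
group-authz/   network-pools/   network.conf   other/   ssh.conf
</pre></div>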


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<br />

<a name="cloud-overview"> </a>
<h2>Cloud configuration overview _NAMELINK(cloud-overview)</h2>

<p>
    The cloud configuration is a particular configuration of Nimbus that allows the cloud-client to
    operate out of the box. It is what is set up when you are done with the
    <a href="z2c/index.html">Zero To Cloud Guide</a>.
</p>

<p>
    This overview section and subsequent reference sections give a more in-depth explanation of how
    it works in order to provide context for administrators running Nimbus in production that want to
    customize settings or understand things in a more "under the hood" way for security/networking reasons.
</p>

<p>
    <i>This is all information for <b>deployers</b> of the cloud configuration to learn about it and customize it. This is
    <b>not necessary for cloud users</b> to read and understand.</i> If you
    are a cloud user just looking to understand how to launch and manage VMs
    on an existing cloud, start with the <a href="../clouds/">clouds</a> pages.
</p>

<p>
    In the Nimbus 2.6 release the repository service (<a href="#cumulus">Cumulus</a>) and the IaaS services
    MUST be on the same node. In future releases this restriction will
    be lifted.
</p>

<p>
    The server addresses must be directly reachable from the Internet
    or otherwise configured to deal with being NAT'd. The IaaS services container can be set up for NAT or other port
    forwarding situations. Cumulus should be NAT friendly so long as
    its listening port (default 8888) is forwarded through the NAT.
</p>

<p>
    <img src="../img/cloud-overview-rt.png" alt="workspace cloud configuration" />
</p>
<p>
    The diagram above depicts the basic setup.
</p>
<ul>
    <li>
        A special workspace client called the "cloud-client" invokes operations
        on the IaaS services and Cumulus server. A number of defaults are assumed which make this work out of the box (these defaults will be discussed
        later).
    </li>
    <li>
        Files are transferred from the cloud-client to a client-specific
        storage system on the repository node (manual or other types of
        S3 protocol
        based transfers are also possible for advanced users).
    </li>
    <li>
        The service invokes commands on the VMMs to trigger file transfers
        to/from the repository node, VM lifecycle events, and destruction/clean
        up.
    </li>
    <li>
        If the workspace state changes, the cloud-client will reflect this to
        the screen (and log files) and depending on the change might also take
        action in response.
    </li>
</ul>




<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<br />

<a name="cloud-userexp"> </a>
<h2>Cloud configuration user experience _NAMELINK(cloud-userexp)</h2>

<p>
    Working backwards from the user's <tt class="literal">cloud-client</tt> experience is a
    good way to understand how the service needs to be set up.
</p>

<p>
    Here is an abbreviated depiction of a simple user interaction with a cloud,
    to give you an idea if you've never used it. This does not depict an
    image transfer to the repository node but that is similarly brief.
</p>

<ol>
    <li>
        <p>
            A grid credential is needed; there is an embedded
            <tt class="literal">grid-proxy-init</tt> program if that is necessary.
        </p>
    </li>

    <li>
        <p>
            You can list what's in your repository directory:
        </p>

<div class="screen">
<b>$</b> ./bin/cloud-client.sh --list<br>
</div>
        <p>
            Sample output:
        </p>

<div class="screen">
<pre>[Image] 'base-cluster-01.gz' Read only
        Modified: Jul 06 @ 17:34 Size: 578818017 bytes (~552 MB)

[Image] 'hello-cloud' Read only
        Modified: May 30 @ 14:16 Size: 524288000 bytes (~500 MB)

[Image] 'hello-cluster' Read only
        Modified: Jun 30 @ 20:18 Size: 524288000 bytes (~500 MB)</pre>
</div>


    </li>

    <li>
        <p>
            And pick one to run (ignore the 'cluster' images for now)
        </p>

<div class="screen">
<b>$</b> ./bin/cloud-client.sh --run --name hello-cloud --hours 1<br>
</div>
        <p>
            Sample output:
        </p>

<div class="screen"><pre>
SSH public keyfile contained tilde:
  - '~/.ssh/id_rsa.pub' --> '/home/guest/.ssh/id_rsa.pub'

Launching workspace.

Using workspace factory endpoint:
    https://cloudurl.edu:8443/wsrf/services/WorkspaceFactoryService

Creating workspace "vm-023"... done.

       IP address: 123.123.123.123
         Hostname: ahostname.cloudurl.edu
       Start time: Fri Feb 29 09:36:39 CST 2008
    Shutdown time: Fri Feb 29 10:36:39 CST 2008
 Termination time: Fri Feb 29 10:46:39 CST 2008

Waiting for updates.
</pre></div>

        <p>
            Some time elapses as the image file is copied to the VMM node. Then
            a running notification is printed:
        </p>

<div class="screen"><pre>
State changed: Running

Running: 'vm-023'
</pre></div>
    </li>

    <li>
        <p>
            The client had picked up your default public SSH key and sent it to
            be installed on the fly into the VM's
            <tt class="literal">authorized_keys</tt> policy
            for the root account. So after launching you can use the printed
            hostname to log in as root:
        </p>

        <div class="screen">
            <b>$</b> ssh root@ahostname.cloudurl.edu<br>
        </div>
    </li>
</ol>

<br />

<p>
    You can see an example of a cluster cloud-client deployment on the
    <a href="../clouds/clusters.html">one-click clusters</a> page.
</p>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<br />

<a name="cloud-assumptions"> </a>
<h2>Cloud configuration assumptions and defaults _NAMELINK(cloud-assumptions)</h2>

<p>
    A number of things go into making the cloud client work out of the box,
    but it is in large part accomplished by giving the user a downloadable
    package with a number of default configurations.
</p>

<p>
    These defaults limit functionality options in some cases, but that is the
    idea: eliminate decisions that need to be made and set working defaults.
    There are avenues left open for experienced users to do more
    (for example, by overriding the defaults or even switching over to the
    regular workspace client).
</p>

<p>
    In the previous section, the first thing that probably stands out is that
    there are no contact addresses being entered on the command line.
</p>

<p>
    The service and repository URLs are derived from a properties file
    that is included in the toplevel "conf" directory of the cloud-client
    package.
</p>

<p class="indent">
    <b>Note</b>: How properties files and commandline overrides work is covered
    in detail in a <a href="#cloud-properties">later section</a>; it is all designed to be
    flexible under the covers. If you don't want to follow the conventions
    laid out in this current "assumptions" section, it will be important to
    understand the later section to know how to change things for a good
    client package or properties file(s) that your users can use. Continue
    reading this section first, though, to get the basic ideas.
</p>

<p>
    There are three main groups of assumptions and defaults. The first is the
    contact and identity information of the workspace service
    (see above for the configuration sample where these are specified).
    The other two groups make up the rest of this "Assumptions" section:
</p>

<ul>
    <li><a href="#cloud-personaldir">Deriving per-user repository directories</a></li>
    <li><a href="#cloud-runtime">Runtime assumptions</a></li>
</ul>


<a name="cloud-personaldir"> </a>
<h3>Deriving per-user repository directories _NAMELINK(cloud-personaldir)</h3>

<p>
    For Cumulus S3 based commands (like <i>--list</i>, <i>--delete</i>, and
    <i>--transfer</i>) the server to contact is taken from the contact in
    the cloud properties file, which also stores the s3id and s3 secret
    used to authenticate with the Cumulus server.
</p>

<p>
    When you transfer a local file, the target of the transfer is the same
    filename in your personal repository directory. When you refer to the name
    of a workspace to run, this name must correspond to a filename in your
    personal repository directory.
</p>

<p>
    We know which repository server to contact, but how is the per-user directory derived?
</p>

<p>
    There are two other components to derive the directory used: the configured
    <i>base bucket property</i> and the <i>user S3 ID</i>.
</p>
<ul>
    <li>
        The configured <i>base bucket property</i>. The default
        configuration for the base bucket on the repository node is
        "<b>Repo</b>". This value can be changed in the cloud.properties
        file so long as it matches the particular cloud's setup. It is
        best not to alter this value.
    </li>
    <li>
        The <i>user S3 ID</i> (from the cloud properties file), which keeps
        each user's files separate within the base bucket.
    </li>
</ul>


<p class="indent">
    <i>
        There is a cloud-client option to input any name or local
        file path and see what the derived URL is. See the --extrahelp
        description of the --print-file-URL option.
    </i>
</p>
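<p class="indent">
    <i>
        For example, to inspect the derivation without transferring anything
        (the exact invocation may differ between cloud-client versions;
        consult --extrahelp):
    </i>
</p>

<div class="screen"><pre>
$ ./bin/cloud-client.sh --extrahelp | grep -A 3 print-file-URL
$ ./bin/cloud-client.sh --print-file-URL --name hello-cloud
</pre></div>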

<a name="cloud-runtime"> </a>
<h3>Runtime assumptions _NAMELINK(cloud-runtime)</h3>

<p>
    The second set of assumptions to cover is how a given image file is going
    to actually work. There are many options that you can specify in regular
    workspace requests. For example, the memory size, the number of network
    interfaces to construct, the pool name(s) to lease network addresses from,
    and the partition name the VM is expecting for the base partition.
</p>

<p>
    Some fixed assumptions are made:
</p>
<ul>
    <li>There can be only <b>one network interface</b></li>
    <li>The network interface <b>is expecting its address via DHCP</b></li>
    <li>
        There can be only <i>one partition file</i>, for the root partition,
        configured with an ext2/ext3 filesystem. Other filesystems may not
        work correctly (this has to do with the cloud's default kernel as well
        as its ability to edit the image's files before boot).
    </li>
</ul>
<p>
    The rest of the launch request is filled by default configurations,
    here they are:
</p>
<ul>
    <li>Request <b>3584</b> MB of memory (this is usually overridden)</li>
    <li>Request a network address from a pool named <b>public</b></li>
    <li>Mount the partition as <b>sda1</b></li>
</ul>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="cloud-necessary"> </a>
<h2>Cloud configuration, necessary configurations _NAMELINK(cloud-necessary)</h2>

<p>
    The previous section summed up the defaults and main assumptions. Opting
    to follow these conventions in your cloud leads to these configuration
    conclusions:
</p>

<ul>
    <li>
        <p>
            Install the workspace service in
            <a href="../admin/reference.html#resource-pool-and-pilot">resource
            pool mode</a>.
        </p>
    </li>
    <li>
        <p>
            Configure an
            <a href="../admin/quickstart.html#networks">network</a>
            for addresses to lease from and call it "<b>public</b>".
        </p>
    </li>
    <li>
        <p>
            Create a <i>cloud.properties</i> file for your cloud with
            the values in this <a href="../examples/cloud.properties">example
            file</a> changed to reflect the correct URLs and identities.
        </p>
        <p>
            It is best to distribute a unique cloud.properties file to each user with the Cumulus credentials already in the file; this is easily set up when using the nimbus-new-user program and the Nimbus web application. See the <a href="#user-management">user management</a> section for details.
        </p>
    </li>
    <li>
        <p>
            If you need to adjust the default memory request, add a line of
            text like so to the <i>cloud.properties</i> file you will
            distribute: <b>vws.memory.request=256</b>
        </p>
    </li>
    <li>
        <p>
            Create a <i>Repo</i> base bucket on the repository node.
            This should be done by the Nimbus installer.
        </p>
    </li>

</ul>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="cloud-properties"> </a>
<h2>Cloud configuration property reference _NAMELINK(cloud-properties)</h2>

<p>
    This section goes into more detail about the property file and commandline
    configurations. This is especially important to understand if you want
    to diverge from the defaults above.
</p>

<div>

    <img src="../img/cloud-client-layout.png"
         alt="workspace cloud client layout"
         class="floatright" />

    <p>
        All commands go through <i>cloud-client.sh</i> which in turn
        invokes the actual cloud client program. The cloud client is written
        in Java and installed at <i>lib/globus/lib/workspace_client.jar</i>.
    </p>
    <p>
        Before calling this program, the script sets up some things:
    </p>

    <ul>
        <li>
            <i>../conf/cloud.properties</i> is set as the user properties file
        </li>
        <li>
            <i>../lib/globus</i> becomes the new GLOBUS_LOCATION (overriding
            anything previously set)
        </li>
        <li>
            <i>../lib/certs</i> is set as a directory to add to the trusted
            X509 certificate directories for identity validations (the client
            verifies it is talking to the right servers). Adding the CA cert(s)
            of the workspace service certificates to this
            directory ensures that the user will not run into CA (trusted
            certificates) problems.
        </li>
    </ul>
<p>
    The cloud client program respects settings from <b>three</b> different
    places, listed here in the <b>order of precedence</b>:
</p>
<ul>
    <li>
        <p>
            <b>Commandline arguments</b> - If the client uses one of the
            optional flags listed in <i>./bin/cloud-client.sh --extrahelp</i>,
            these values are used. Many things can be overridden this way,
            including the service contacts.
        </p>
    </li>
    <li>
        <p>
            <b>User properties file</b> - An example of this is distributed with the cloud client.
        </p>
        <p>
            Note that you can include different properties files and have your
            users switch between clouds using
            <i>./bin/cloud-client.sh --conf ./conf/some-file</i>.
        </p>
        <p>
            If no <i>--conf</i> argument is supplied, the default file
            <i>cloud.properties</i> needs to exist. If you need to change
            this in your client distribution for cosmetic reasons, you can
            do so by editing the one relevant line at the top of
            <i>./bin/cloud-client.sh</i>
        </p>
    </li>
    <li>
        <p>
            <b>Embedded properties file</b> - A properties file lives inside
            the workspace client jar (which is installed into
            <i>lib/globus/lib/workspace_client.jar</i>). This controls all
            the remaining configurations.
        </p>
    </li>
</ul>
</div>

<p>
    There are (intentionally) <b>no fallback settings</b> for many of the properties; they must be included in the cloud.properties file you give to a user:
</p>

<ul>
    <li>ssh.pubkey (Path to the SSH public key to log in with)</li>
    <li>vws.factory (Host+port of the Virtual Workspace Service)</li>
    <li>vws.factory.identity (Virtual Workspace Service X509 identity)</li>
    <li>vws.repository (Host+port of the Cumulus image repository)</li>
    <li>vws.repository.type=cumulus (currently Cumulus is the only supported repository protocol)</li>
    <li>vws.repository.canonicalid (The user's canonical ID)</li>
    <li>vws.repository.s3id (The user's S3 ID)</li>
    <li>vws.repository.s3key (The user's S3 secret key)</li>
    <li>vws.repository.s3bucket (The bucket where the system stores images)</li>
    <li>vws.repository.s3basekey (The basename of all images in the bucket)</li>
</ul>
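<p>
    For illustration only, a cloud.properties fragment covering these settings
    might look like the following; every value in angle brackets is a
    placeholder to be replaced with your cloud's real contacts and the user's
    real credentials:
</p>

<pre class="panel">
ssh.pubkey=~/.ssh/id_rsa.pub
vws.factory=&lt;service hostname&gt;:8443
vws.factory.identity=&lt;service X509 DN&gt;
vws.repository=&lt;repository hostname&gt;:8888
vws.repository.type=cumulus
vws.repository.canonicalid=&lt;user canonical ID&gt;
vws.repository.s3id=&lt;user access ID&gt;
vws.repository.s3key=&lt;user access secret&gt;
vws.repository.s3bucket=Repo
vws.repository.s3basekey=&lt;base key&gt;
</pre>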


<p>
    These are the embedded properties that are shipped with the cloud client;
    they can also exist in the cloud properties files to override the defaults:
</p>

<pre>
# Default ms between polls
vws.poll.interval=2000

# Default client behavior is to poll, not use asynchronous notifications
vws.usenotifications=false

# Default memory request
vws.memory.request=3584

# Image repository base directory (only used for older GridFTP based clouds)
vws.repository.basedir=/cloud/

# CA hash of target cloud (only used for advice in --security)
vws.cahash=6045a439

# propagation setup for cloud (only used for older GridFTP based clouds)
vws.propagation.scheme=scp
vws.propagation.keepport=false

# Metadata defaults
vws.metadata.association=public
vws.metadata.mountAs=sda1
vws.metadata.nicName=eth0
vws.metadata.cpuType=x86
vws.metadata.vmmType=Xen
vws.metadata.vmmVersion=3

# Filename defaults for history directory
vws.metadata.fileName=metadata.xml
vws.depreq.fileName=deprequest.xml
</pre>



<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="user-management"> </a>
<h2>User management _NAMELINK(user-management)</h2>


<p>
    In order to manage Nimbus users more easily, a set of command line tools
    has been created.
</p>

<UL>
    <LI>nimbus-new-user</LI>
    <LI>nimbus-list-users</LI>
    <LI>nimbus-edit-user</LI>
    <LI>nimbus-remove-user</LI>
</UL>

<p>
All of the tools take a single mandatory argument, an email address, with
the slight exception of <i>nimbus-list-users</i>. nimbus-list-users allows
an administrator to query for users on their system and therefore takes
a query pattern as its argument. For example, if you wanted to look up
all the users with email addresses at gmail.com you would run:
</p>

<pre class="panel">
$ nimbus-list-users %@gmail.com
</pre>

<p>
All of the command line tools accept the <i>--help</i> argument, where further information
can be found. For didactic purposes, here is an example of a common session in which a
new user is created, changed, listed, and removed:
</p>

<div class="screen">
<pre>
$ ./bin/nimbus-new-user user1@nimbus.test
cert : /home/bresnaha/NIM/var/ca/tmpm3s0Vccert/usercert.pem
key : /home/bresnaha/NIM/var/ca/tmpm3s0Vccert/userkey.pem
dn : /O=Auto/OU=a645a24d-6183-4bbd-9537-b7260749c716/CN=user1@nimbus.test
canonical id : aa55655a-8552-11df-a58d-001de0a80259
access id : p3BR1WQTpio8JShc8YD7S
access secret : LRY6lMgIFE5BioK5XRu7eZKecBDHjB35PVOqAmCLDm
url : None
web id : None
cloud properties : /home/bresnaha/NIM/var/ca/tmpm3s0Vccert/cloud.properties
$

$ ./bin/nimbus-edit-user -p NewPassWord user1@nimbus.test
dn : /O=Auto/OU=a645a24d-6183-4bbd-9537-b7260749c716/CN=user1@nimbus.test
canonical id : aa55655a-8552-11df-a58d-001de0a80259
access id : p3BR1WQTpio8JShc8YD7S
access secret : NewPassWord
$

$ ./bin/nimbus-list-users %@nimbus.test
dn : /O=Auto/OU=a645a24d-6183-4bbd-9537-b7260749c716/CN=user1@nimbus.test
canonical id : aa55655a-8552-11df-a58d-001de0a80259
access id : p3BR1WQTpio8JShc8YD7S
access secret : NewPassWord
display name : user1@nimbus.test
$

$ ./bin/nimbus-remove-user user1@nimbus.test
$
</pre>
</div>




<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="group-authz"> </a>
<h2>Per-user rights and allocations _NAMELINK(group-authz)</h2>

<p>
    In the <i>services/etc/nimbus/workspace-service/group-authz/</i> directory are the default policies for each user. You pick a pre-configured policy to apply to a new user. The "groups" are not a shared allocation but rather each group is a policy that describes a "type" of user.
</p>

<p>
    Todo: describe what can be tracked on a per-user basis
</p>

<p>
    Todo: speak of nimbus-new-user integration
</p>




<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="elastic"> </a>
<h2>Enabling the EC2 SOAP frontend _NAMELINK(elastic)</h2>
<p>
    After installing, the EC2 query frontend should be immediately
    operational. However, if you wish to use the SOAP frontend as well,
    you must make a few configuration changes. To begin, see the
    <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/elastic</tt>
    directory. The <tt class="literal">elastic.conf</tt> file here specifies
    what the EC2 "instance type"
    allocations should translate to and what networks should be requested
    from the underlying workspace service when VM create requests are sent.
</p>
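<p>
    As a rough sketch only (the property names below are illustrative and may
    not match your release exactly; the authoritative list and documentation
    are the comments in <i>elastic.conf</i> itself), the file maps instance
    types to memory sizes and EC2 addressing to Nimbus network names:
</p>

<pre class="panel">
# memory (MB) requested from the workspace service for each EC2 instance type
memory.small=256
memory.large=1024

# Nimbus network names used for EC2 "public"/"private" addressing
general.associations.public=public
general.associations.private=private
</pre>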

<p>
    By default, a Nimbus installation will enable this service:
</p>

<div class="screen"><pre>
https://10.20.0.1:8443/wsrf/services/ElasticNimbusService
</pre></div>

<p>
    But before the service will work, you must adjust a container configuration.
    This accounts for some security-related customs of EC2:
</p>
<ul>
    <li>
        <p>
            Secure message is used, but only on the request. No secure message
            envelope is sent around EC2 responses; therefore, EC2 clients do not
            expect such a response. The protocol relies on the fact that <b>https</b>
            is being used to protect responses.
        </p>
        <p>
            Both integrity and encryption problems are relevant, so be wary of any
            http endpoint being used with this protocol. For example, you
            probably want to make sure that add-keypair private key
            responses are encrypted (!).
        </p>
    </li>
    <li>
        <p>
            Also, adjusting the container configuration gets around a timestamp
            format incompatibility we discovered (the timestamp is normalized
            <i>after</i> the message envelope signature/integrity is confirmed).
        </p>
    </li>
</ul>

<p>
    There is a sample container <i>server-config.wsdd</i> configuration
    to compare against
    <a href="sample-server-config.wsdd-supporting-ec2.xml">here</a>.
</p>

<p>
    Edit the container deployment configuration:
</p>
_EXAMPLE_CMD_BEGIN
nano -w etc/globus_wsrf_core/server-config.wsdd
_EXAMPLE_CMD_END

<p>
    Find the <b>&lt;requestFlow&gt;</b> section and comment out the existing
    <i>WSSecurityHandler</i> and add this new one:
</p>

<div class="screen"><pre>
    &lt;handler type="java:org.globus.wsrf.handlers.JAXRPCHandler"&gt;

        &lt;!-- <b>enabled</b>: --&gt;
        &lt;parameter name="className"
                   value="org.nimbustools.messaging.gt4_0_elastic.rpc.WSSecurityHandler" /&gt;

        &lt;!-- <b>disabled</b>: --&gt;
        &lt;!--&lt;parameter name="className"
                   value="org.globus.wsrf.impl.security.authentication.wssec.WSSecurityHandler"/&gt; --&gt;
    &lt;/handler&gt;
</pre></div>


<p>
    Now find the <b>&lt;responseFlow&gt;</b> section and comment out the existing
    <i>X509SignHandler</i> and add this new one:
</p>

<div class="screen"><pre>
    &lt;handler type="java:org.apache.axis.handlers.JAXRPCHandler"&gt;

        &lt;!-- <b>enabled</b>: --&gt;
        &lt;parameter name="className"
                   value="org.nimbustools.messaging.gt4_0_elastic.rpc.SignHandler" /&gt;

        &lt;!-- <b>disabled</b>: --&gt;
        &lt;!--&lt;parameter name="className"
                       value="org.globus.wsrf.impl.security.authentication.securemsg.X509SignHandler"/&gt;--&gt;
    &lt;/handler&gt;</pre></div>

<p>
    If you don't make this configuration, you will see
    <a href="troubleshooting.html#elastic-timestamp-error">this error</a>
    when trying to use an EC2 client.
</p>
<p>
    A container restart is required after the configuration change.
</p>

<div class="note">
<p class="note-title">KVM</p>
    If you are using KVM you need to adjust the default mountpoint to support disk images.
    This is found in
    <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/elastic/other/other-elastic.conf</tt>.
    Set <tt class="literal">rootfile.mountas=hda</tt>.
</p>
</div>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="query"> </a>
<h2>Configuring the EC2 Query frontend _NAMELINK(query)</h2>

<p>
The EC2 Query frontend supports the same operations as the SOAP frontend.
However, it does not run in the same container. It listens on HTTPS using Jetty.
Starting with Nimbus 2.4, the query frontend is enabled and listens on port 8444.
For instructions on changing this port, see the
<a href="#service-ports">service ports</a> section.
</p>

<p>
Configuration for the query frontend lives in the
    <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/query/query.conf</tt>
    file.
</p>

<p>
The Query interface does not rely on X509 certificates for security. Instead,
it uses a symmetric signature-based approach. Each user is assigned an access
identifier and secret key. These credentials are also maintained by the service.
Each request is "signed" by the client by generating a hash of parts of the
request and attaching them. The service performs this same signature process
and compares its result with the one included in the request.
</p>

<!-- mention any safeguards against replay attacks? -->
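<p>
    As a minimal illustration of the idea only (this shows the general
    keyed-hash mechanism, not the exact string-to-sign rules or hash algorithm
    an EC2 client uses; <i>EC2_SECRET_KEY</i> is a placeholder for the user's
    secret key):
</p>

<div class="screen"><pre>
$ printf 'GET\nservice.host:8444\n/\nAction=DescribeImages' \
    | openssl dgst -sha256 -hmac "$EC2_SECRET_KEY" -binary | base64
</pre></div>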

<p>
    There is support for creating query credentials in the nimbus-new-user program; for more information see the <a href="#user-management">user management</a> section.
</p>

<p>
    There is support for distributing query tokens
    via the <a href="../faq.html#nimbusweb">Nimbus Web</a> application.
</p>

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="service-ports"> </a>
<h2>Changing the network ports of Nimbus and Cumulus <span class="namelink"><a href="#service-ports">(#)</a></span>
</h2>
<p>
    The assorted Nimbus and Cumulus services use several network ports. Each is configured with sensible
    defaults, but you may change them if needed. Below are instructions for changing the port of each service.
</p>

<h3>Nimbus core services (default 8443)</h3>
<p>
    Edit <tt class="literal">$NIMBUS_HOME/libexec/run-services.sh</tt> and change the
    <tt class="literal">PORT</tt> line to your desired port number. You must restart the service
    for changes to take effect.
</p>
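<p>
    For example, the relevant line might end up looking like this (the rest of
    <i>run-services.sh</i> is left unchanged):
</p>
<div class="screen"><pre>
# in $NIMBUS_HOME/libexec/run-services.sh
PORT=9443
</pre></div>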

<h3>Nimbus EC2 query frontend (default 8444)</h3>
<p>
    Edit <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/query/query.conf</tt> and
    adjust the <tt class="literal">https.port</tt> line. You must restart the service
    for changes to take effect.
</p>

<h3>Cumulus (default 8888)</h3>
<p>
    The configuration options for Cumulus can be found in
    <tt class="literal">$NIMBUS_HOME/cumulus/etc/cumulus.ini</tt>.
    Under the heading <tt class="literal">[cb]</tt> there is a <tt class="literal">port</tt>
    entry. Change that value and restart Cumulus with the <tt class="literal">nimbusctl</tt>
    program.
</p>
<pre class="panel">
[cb]

installdir = &lt;nimbus home&gt;
port = 8888
hostname = &lt;hostname&gt;
</pre>

<p>
    <i>note:</i>
    If this change is made after you have distributed a cloud.properties
    file to users then you will need to instruct your cloud-client users
    to change the value <tt class="literal">vws.repository=&lt;hostname&gt;:&lt;port&gt;</tt>
    in their local cloud.properties file.

</p>

<h3>Nimbus Context Broker REST frontend (default 8446)</h3>
<p>
    The context broker also has a RESTful HTTP interface which listens on a separate port.
    You can disable or change the port of this interface in a configuration file:
    <tt class="literal">$NIMBUS_HOME/services/etc/nimbus-context-broker/jndi-config.xml</tt>.
    Parameters under the <tt class="literal">rest</tt> resource control this interface.
    Restart the service for changes to take effect.
</p>

<h3>Nimbus Web application (default 1443)</h3>
<p>
    When enabled, the web application listens by default on port 1443. This and other configuration
    options are located in the <tt class="literal">$NIMBUS_HOME/web/nimbusweb.conf</tt>
    file. Changes to this file require a restart of the service.
</p>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="nimbusweb-config"> </a>
<h2>Configuring the Nimbus Web interface <span class="namelink"><a href="#nimbusweb-config">(#)</a></span>
</h2>

<p>
    Starting with Nimbus 2.4, the Nimbus Web application is bundled with the service
    distribution but is disabled by default. To enable it, edit the
    <i>$NIMBUS_HOME/nimbus-setup.conf</i>
    file and change the value of <i>web.enabled</i> to <b>True</b>. Next you should run
    <i>nimbus-configure</i> to propagate the change. Now you can use <i>nimbusctl</i>
    to start/stop the web application.
</p>
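<p>
    A typical enable sequence looks like the following sketch (the final
    restart step is an assumption; any way of restarting the Nimbus services
    after <i>nimbus-configure</i> works):
</p>
<div class="screen"><pre>
$ nano -w $NIMBUS_HOME/nimbus-setup.conf       # change web.enabled to True
$ $NIMBUS_HOME/bin/nimbus-configure            # propagate the change
$ $NIMBUS_HOME/bin/nimbusctl restart           # assumption: restart so the web app is started
</pre></div>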


<p>
    Once the web application has been configured, you can start to use it with the nimbus-new-user program (see the help output); see the <a href="#user-management">user management</a> section.
</p>

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="nimbusweb-usage"> </a>
<h2>Using the Nimbus Web interface <span class="namelink"><a href="#nimbusweb-usage">(#)</a></span>
</h2>

<p>
    The Nimbus web application provides basic facilities for distributing new X509 credentials, EC2 query tokens, and cloud.properties files to users.
    
</p>
<p>
    Previously this was a tedious process that was difficult to do in a way that was both secure and user friendly. Nimbus Web allows an admin to upload credentials for a user and then send them a custom URL which invites them to create an account.
</p>
<p>
    Once the web application has been configured, you can start to use it with the nimbus-new-user program (see the help output, -W flag), which allows you to very quickly add a user to the system and get the URL to distribute in your welcome email. The <a href="#user-management">user management</a> tools also provide machine-parsable output options that make them easy to incorporate into scripts (perhaps you would like to go further and create those emails entirely programmatically).
</p>
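<p>
    For example (the email address is a placeholder; see
    <i>nimbus-new-user --help</i> for the full set of flags):
</p>
<div class="screen"><pre>
$ ./bin/nimbus-new-user -W newuser@example.com
</pre></div>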
<p>
    To get started, log into the web interface as a superuser and go to the
    <i>Administrative Panel</i>. This page has a section for creating users
    as well as viewing pending and existing users. The best way to do this is by using the nimbus-new-user tool, but this option is available for you to create accounts manually.
</p>

<p>
    If you go the manual route: fill in the appropriate fields and upload an X509 certificate and (passwordless) key for the user. Note that the application expects plain text files, so depending on your browser you may need to rename files to have a .txt extension before you can upload them. Once the new account is created, you will be provided with a custom URL. You must paste this URL into an email to the user along with usage instructions.
</p>

<p>
    When the user accesses the custom URL, they will be asked to create a password
    and login. Inside their account, they can download the certificate and key which
    were provided for them by the admin. Note that the design of the application attempts
    to maximize the security of the process, with several important features:
</p>
<ul>
    <li>
        <b>The URL token can only be successfully used once</b>. After a
        user creates a password and logs in, future attempts to access that URL will
        fail. This is to prevent someone from intercepting the URL and using it to
        access the user's credentials. If this happens, the real user will be unable
        to login and will (hopefully) contact the administrator immediately (there is a message urging them to do so).
    </li>
    <li>
        In the same spirit, <b>the URL token will expire</b> after a configurable number of
        hours (default: 12).
    </li>
    <li>
        <b>The user's X509 private key can be downloaded once and only once</b>.
        After this download occurs,
        the key will be deleted from the server altogether. In an ideal security system,
        no person or system will ever be in possession of a private key, except for the
        user/owner of the key itself. Because we don't follow this for the sake of usability,
        we attempt to minimize the time that the private key is in the web app database.
    </li>
    <li>
        When a URL token is accessed or a private key is downloaded, the time and IP address of
        this access is logged and displayed in the administrative panel.
    </li>
    <li>
        The nimbus-new-user tool can create a custom cloud.properties file to use with your cloud. This will have all the right configurations as well as the user's query credentials in the "vws.repository.s3id" and "vws.repository.s3key" fields for using Cumulus.
    </li>
</ul>
      

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<br />

<a name="cert-pointing"> </a>
<h2>Configuring a different host certificate _NAMELINK(cert-pointing)</h2>

<p>
    The Nimbus installer creates a Certificate Authority which is used for
    (among other things) generating a host certificate for the various services.
    There are three files involved in your host certificate and they
    are all generated during the install by the <em>nimbus-configure</em> program.
    By default, these files are placed in "$NIMBUS_HOME/var/" but you can control
    their placement with properties in the "$NIMBUS_HOME/nimbus-setup.conf" file.
</p>
<ul>
    <li><b>hostcert.pem</b> - The host certificate.
        The certificate for the issuing CA must be in the Nimbus trusted-certs
        directory, in hashed format.
    </li>
    <li><b>hostkey.pem</b> - The private key. Must be unencrypted and readable
    by the Nimbus user.</li>
    <li><b>keystore.jks</b> - Some Nimbus services require this special Java
    Key Store format. The <em>nimbus-configure</em> program generates this
    file from the host cert and key. If you delete the file, it can
    be regenerated by running <em>nimbus-configure</em> again.</li>
</ul>
<p>
    To use a custom host certificate, you can delete (or relocate)
    these three files, copy in your own hostcert.pem and hostkey.pem, and run
    <em>nimbus-configure</em>, which will generate the keystore.
</p>
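<p>
    A sketch of that procedure, assuming the default file locations under
    <i>$NIMBUS_HOME/var/</i> and that your own certificate and key are in
    <i>/path/to/</i>:
</p>
<div class="screen"><pre>
$ cd $NIMBUS_HOME/var
$ mv hostcert.pem hostcert.pem.old
$ mv hostkey.pem hostkey.pem.old
$ mv keystore.jks keystore.jks.old
$ cp /path/to/hostcert.pem /path/to/hostkey.pem .
$ $NIMBUS_HOME/bin/nimbus-configure     # regenerates keystore.jks from the new cert and key
</pre></div>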
<div class="note">
<p class="note-title">CA Certs</p>
    It is important that the issuing CA cert is trusted by Nimbus (and any
    clients used to access the Nimbus services). This is done by placing the
    hashed form of the CA files in the trusted-certs directory, by default
    "$NIMBUS_HOME/var/ca/trusted-certs/". For example, these three files:
</p>
</div>
<div class="note">
<p class="note-title">Cumulus https</p>
    <b>NOTE:</b>
    If you are using Cumulus with https you will need to point it at the
    correct certificates as well. This is further explained
    <a href="#cumulus-https">here.</a>
</p>
</div>

<div class="screen"><pre>
3fc18087.0
3fc18087.r0
3fc18087.signing_policy
</pre></div>
<p>
    If you simply want to generate new host certificates using the Nimbus
    internal CA (perhaps using a different hostname), you can follow a
    similar procedure. Delete or relocate the hostcert.pem, hostkey.pem,
    and keystore.jks files and then run <em>nimbus-configure</em>. New
    files will be generated.
</p>

<p>
    You can also keep these files outside of the Nimbus install (for example, if
    you use the same host certificate for multiple services on the same machine).
    Just edit the $NIMBUS_HOME/nimbus-setup.conf file and adjust the hostcert,
    hostkey, and keystore properties. Then run <em>nimbus-configure</em>. If these files
    do not exist, they will be created.
</p>

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="manual-nimbus-basics"> </a>
<h2>Configuring Nimbus basics manually without the auto-configuration program _NAMELINK(manual-nimbus-basics)</h2>

<p>
    What follows are the instructions for setting up a container as they existed
    before the auto-configuration program or the installer came into being (see
    <a href="quickstart.html#part-IIb">here</a> for
    information about the auto-configuration program). We are leaving them in the docs because they provide some insight, especially for administrators who are preparing programmatic node configurations for their clusters (using systems such as Chef).
</p>

<hr />


<h4>* Service hostname:</h4>
<p>
    Navigate to the workspace-service configuration directory:
</p>

_EXAMPLE_CMD_BEGIN
cd $NIMBUS_HOME/services/etc/nimbus/workspace-service
_EXAMPLE_CMD_END


<p>
    Edit the "ssh.conf" file:
</p>

_EXAMPLE_CMD_BEGIN
nano -w ssh.conf
_EXAMPLE_CMD_END


<p>
    Find this setting:
</p>

<div class="screen"><pre>
service.sshd.contact.string=REPLACE_WITH_SERVICE_NODE_HOSTNAME:22</pre>
</div>

<p>
    ... and replace the CAPS part with your service node hostname. This
    hostname and port should be accessible from the VMM nodes.
</p>

<p>
    (The guide assumes you will have the same privileged account name on the
     service node and VMM nodes, but if not, this is where you would make the
     changes as you can read in the ssh.conf file comments).
</p>


<h4>* VMM names:</h4>

<p>
    See the <a href="#resource-pool">resource pool</a> section to learn how to add VMM names.
</p>

<a name="networks"> </a>
<h4>* Networks:</h4>
<p>
    Navigate to the workspace service networks directory:
</p>

_EXAMPLE_CMD_BEGIN
cd $NIMBUS_HOME/services/etc/nimbus/workspace-service/network-pools/
_EXAMPLE_CMD_END

<p>
    The service is packaged with two sample network files, <i>public</i> and
    <i>private</i>.
</p>
<p>
    You can name these files anything you want. The file names will be the
    names of the networks that are offered to clients. It's a convention to
    provide "public" and "private" but these can be anything. If you do change
    this, the cloud client configuration for what network(s) to request will
    need to be overridden in the cloud.properties file that you distribute to
    users.
</p>

<p>
    The <i>public</i> file has some comments in it. Edit this file to have
    the one DNS line at the top and one network address to give out. The
    subnet and network you choose should be something the VMM node can bridge
    to (there are some advanced configs to be able to do DHCP and bridge
    addresses for addresses that are foreign to the VMM, but this is not
    something addressed in this guide).
</p>

_EXAMPLE_CMD_BEGIN
nano -w public
_EXAMPLE_CMD_END

<div class="screen"><pre>
192.168.0.1
fakepub1 192.168.0.3 192.168.0.1 192.168.0.255 255.255.255.0</pre>
</div>
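<p>
    Reading that sample against the file's own comments: the first line is the DNS
    server to hand out, and each subsequent line describes one address to lease.
    Annotated here only for illustration, the fields are:
</p>

<div class="screen"><pre>
fakepub1 192.168.0.3 192.168.0.1 192.168.0.255 255.255.255.0
^        ^           ^           ^             ^
hostname IP address  gateway     broadcast     netmask</pre>
</div>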

<p>
    It is possible to force specific MAC addresses for each IP address; see the file for syntax details. Usually the service will pick these for you from a pool of MAC addresses starting with a prefix that is configured in the "$NIMBUS_HOME/services/etc/nimbus/workspace-service/network.conf" file.
</p>



<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />


<a name="resource-pool-and-pilot"> </a>
<h2>Resource pool and pilot configurations _NAMELINK(resource-pool-and-pilot)</h2>
<p>
    There are modules for two resource management strategies currently
    distributed with Nimbus: the default "resource pool" mode and the "pilot"
    mode.
</p>
<p>
    The "resource pool" mode is where the service has direct control of a pool
    of VMM nodes. The service assumes it can start VMs on these nodes directly
    whenever it needs to.
</p>
<p>
    The "pilot" mode is where the service makes a request to a cluster's
    Local Resource Management System (LRMS) such as PBS. The VMMs are equipped
    to run regular jobs in domain 0. But if pilot jobs are submitted, the nodes
    are secured for VM management for a certain time period by the workspace
    service. If the LRMS or
    administrator preempts/kills the pilot job earlier than expected, the VMM
    is no longer available to the workspace service.
</p>
<p>
    Both modes are explained below. To learn about backfill and spot instances, see the
    <a href="#backfill-and-spot-instances">backfill and spot instances overview</a>.
</p>
<p>
    The "services/etc/nimbus/workspace-service/other/<b>resource-locator-ACTIVE.xml</b>"
    file dictates what mode is in use (container restart required if this
    changes). See the available
    "services/etc/nimbus/workspace-service/other/<b>resource-locator-*</b>" files.
</p>

<a name="resource-pool"> </a>
<h3>Resource pool _NAMELINK(resource-pool)</h3>
<p>
    This is the default, see the <a href="#resource-pool-and-pilot">overview</a>.
</p>


<p>
    In the <a href="z2c/index.html">Zero to Cloud Guide</a>, the configuration script that you interacted with at the end of the <a href="z2c/ssh-setup.html">SSH Setup</a> section took care of configuring the workspace service with the first VMM to use.
</p>

<p>
    A cloud with one VMM is perfectly reasonable for a test setup, but when it comes time to offer resources to others for real use, we bet you might want to add a few more. Maybe a few hundred more.
</p>

<p>
    As of Nimbus 2.6, it is possible to configure the running service dynamically. You can interact with the scheduler and add and remove nodes on the fly. The <tt class="literal">nimbus-nodes</tt> program is what you use to do this.
</p>

<p>
    Have a look at the help output:
</p>

<div class="screen"><pre>
cd $NIMBUS_HOME
./bin/nimbus-nodes -h</pre>
</div>

<p>
    The following example assumes you have homogeneous nodes. Each node has, let's say, 8GB RAM and you want to dedicate the nodes exclusively to hosting VMs. Some RAM needs to be saved for the system (in Xen, for example, this is "domain 0" memory), so we decide to offer 7.5GB to VMs. For RAM, there is no overcommit possible with Nimbus.
</p>

<div class="note">
<p class="note-title">nimbus-nodes needs a running service</p>
    If the service is not running, the nimbus-nodes program will fail to adjust anything. Make sure the workspace service is running with "./bin/nimbusctl start".
</p>
</div>

<p>
    You can SSH to each node without password from the nimbus account, right?
</p>

<div class="screen"><pre>
service-node $ whoami
nimbus
service-node $ ssh nequals01</pre>
</div>

<div class="screen"><pre>
nequals01 $ ...</pre>
</div>

<p>
    The nodes in the cluster are named based on numbers, so for example "nequals01", "nequals02", etc. This means we can construct the command with a for loop.
</p>

<div class="screen"><pre>
$ NODES="nequals01"
$ for n in `seq -w 2 10`; do NODES="$NODES,nequals$n"; done
$ echo $NODES
nequals01,nequals02,nequals03,nequals04,nequals05,nequals06,nequals07,nequals08,nequals09,nequals10
</pre>
</div>

<p>
    With the <tt class="literal">$NODES</tt> variable in hand, we can make the node-addition call.
</p>

<div class="screen"><pre>
$ ./bin/nimbus-nodes --add $NODES --memory 7680
</pre>
</div>

<p>
    At any time you can use the "--list" action to see what the current state of the pool is.
</p>

<p>
    There are several other options discussed in the <tt class="literal">nimbus-nodes -h</tt> text; we will highlight one of the most important ones here.
</p>

<p>
    If you ever want to disable a VMM, use the live-update feature. After running the following command, no new VMs can be launched on the node. Any current VMs, however, will continue running. So this is a way to "drain" your nodes of work if there is maintenance coming up, etc.
</p>

<div class="screen"><pre>
$ ./bin/nimbus-nodes --update nequals08 --inactive
</pre>
</div>


<a name="pilot"> </a>
<h3>Pilot _NAMELINK(pilot)</h3>
<p>
    This is the alternative resource management mode, see the
    <a href="#resource-pool-and-pilot">overview</a>. The following steps describe
    how to switch a container over to it.
</p>



<ol>
    <li>
        <p>
            The first step to switching to the pilot based infrastructure
            is to make sure you have at least one working node configured
            with workspace-control, following the instructions in this
            guide as if you were not going to use dynamically allocated
            VMMs via the pilot.
        </p>

        <p>
            If the only nodes available are in the LRM pool, it would be best
            to drain the jobs from one and take it offline while you confirm
            the setup.
        </p>
    </li>

    <li>
        <p>
            Next, make sure that the system account the container is
            running in can submit jobs to the LRM. For example, run
            <b>echo "/bin/true" | qsub</b>
        </p>
    </li>

    <li>
        <p>
            Next, decide how you would like to organize the cluster nodes,
            such that a request for time on the nodes from the workspace
            service in fact ends up on usable VMM nodes.
        </p>
        <p>
            For example, if there are only a portion of nodes configured
            with Xen and workspace-control, you can set up a special node
            property (e.g. 'xen') or perhaps a separate queue or server.
            The service supports submitting jobs with node property
            requirements and also supports the full Torque/PBS
            '[queue][@server]' destination syntax if desired.
        </p>
    </li>

    <li>
        <p>
            Copy the
            "services/etc/nimbus/workspace-service/other/<b>resource-locator-pilot.xml</b>"
            to
            "services/etc/nimbus/workspace-service/other/<b>resource-locator-ACTIVE.xml</b>"
        </p>

        <p>
            The configuration comments in "services/etc/nimbus/workspace-service/<b>pilot.conf</b>"
            should be self explanatory. There are a few to highlight here
            (and note that advanced configs are in <b>resource-locator-ACTIVE.xml</b>).
        </p>

        <ul>
            <li>
                <p>
                    HTTP digest access authentication is one mechanism for
                    pilot notifications. Each message
                    from a pilot process to the workspace service takes
                    on the order of 10ms on our current testbed, which is
                    reasonable.
                </p>

                <p>
                    The <b>contactPort</b> setting is used
                    to control what port the embedded HTTP server listens
                    on. It is also the contact URL passed to the pilot
                    program; an easy way to get this right is to use an
                    IP address rather than a hostname.
                </p>

                <p>
                    Note the <b>accountsPath</b> setting.
                    Navigate to that file ("services/etc/nimbus/workspace-service/<b>pilot-authz.conf</b>"
                    by default) and change the shared secret to something
                    that is not dictionary based and is 15 or more characters long.
                    A script in that directory will produce suggestions.
                </p>

                <p>
                    This port may be blocked off entirely from WAN access
                    via firewall if desired; only the pilot programs need
                    to connect to it. If it is not blocked off, access
                    is still guarded by the HTTP digest access
                    authentication required for connections.
                </p>

                <p>
                    Alternatively, you can configure only SSH for these
                    notifications, or configure both and use SSH as
                    a fallback mechanism.
                    When used as a fallback mechanism, the pilot will try to
                    contact the HTTP server and, if that fails, will then
                    attempt to use SSH. Those messages are written to a
                    file and will be read when the workspace service
                    recovers. This is an advanced configuration; setting
                    up the infrastructure without this configured is recommended
                    for the first pass (to reduce your misconfiguration chances).
                </p>

            </li>

            <li>
                <p>
                    The <b>maxMB</b> setting is used to
                    set a hard maximum memory allotment across all workspace
                    requests (no matter what the authorization layers
                    allow). This is a "fail fast" setting, making sure
                    dubious requests are not sent to the LRM.
                </p>
                <p>
                    To arrive at that number, you must first determine the
                    maximum amount of memory
                    to give domain 0 in non-hosting mode. This should be
                    as much as possible, and you will also configure this later
                    into the pilot program settings (the pilot will make sure domain
                    0 gets this memory back when returning the node from hosting
                    mode to normal job mode).
                </p>
                <p>
                    When the node boots and xend is first run, you should
                    configure things such that domain 0 is already at this
                    memory setting. This way, it will be ready to give
                    jobs as many resources as possible from its initial
                    boot state.
                </p>
                <p>
                    Domain 0's memory is set in the boot parameters.
                    On the "kernel" line you can add a parameter like this:
                    <b>dom0_mem=2007M</b>
                </p>
                <p>
                    If it is too high you will make the node unbootable;
                    2007M is an example from a 2048M node and was arrived
                    at experimentally. We are working on ways to
                    automatically figure out the highest number
                    this can be without causing boot issues.
                </p>
                <p>
                    Take this setting and subtract at least 128M from it,
                    allocating the rest for guest workspaces. Let's label
                    128M in this example as <b>dom0-min</b>
                    and 2007 as <b>dom0-max</b>. Some memory
                    is necessary for domain 0 to at least do privileged
                    disk and net I/O for guest domains.
                </p>
                <p>
                    These two memory settings will be configured into the
                    pilot to make sure domain 0 is always in the correct
                    state. Domain 0's memory will never be set below the
                    <b>dom0-min</b> setting and will always
                    be returned to the <b>dom0-max</b> when
                    the pilot program vacates the node.
                </p>
                <p>
                    Instead of letting the workspace request fail on the
                    backend just before instantiation, the
                    <b>maxMB</b> setting is configured in
                    the service so that requests for more memory will be
                    rejected up front.
                </p>
                <p>
                    So [ <b>dom0-max</b> minus
                    <b>dom0-min</b> equals
                    <b>maxMB</b> ]. And again
                    <b>maxMB</b> is the maximum allowed for
                    guest workspaces.
                </p>
                <p>
                    ( You could make it smaller. But it would not make
                    sense to make it bigger than
                    [ <b>dom0-max</b> minus
                    <b>dom0-min</b> ] because this will
                    cause the pilot program itself to reject the request. )
                </p>
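                <p>
                    Using the example numbers above, the arithmetic works out like
                    this (the values are only illustrative; substitute your own
                    measurements):
                </p>
                <div class="screen"><pre>
dom0-max  = 2007 MB   (memory given to domain 0 when not hosting VMs)
dom0-min  =  128 MB   (memory domain 0 always keeps)
maxMB     = 2007 - 128 = 1879 MB   (maximum allowed for guest workspaces)</pre>
                </div>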
            </li>

            <li>
                <p>
                    The <b>pilotPath</b> setting must be
                    correct and double-checked.
                    See
                    <a href="http://bugzilla.globus.org/bugzilla/show_bug.cgi?id=5869">this
                    bugzilla item</a>.
                </p>
            </li>

        </ul>

    </li>

    <li>
        <p>
            Next, note your pilotPath setting and put a copy of
            <b>workspacepilot.py</b> there. Run
            <b>chmod +x</b> on it and that is all
            that should be necessary for the installation.
        </p>
        <p>
            Python 2.3 or higher (though not Python 3.x) is also required
            but this was required for workspace-control as well.
        </p>
        <p>
            A sudo rule to the xm program is also required but this was
            configured when you set up workspace-control. If the account
            the pilot jobs are run under is different than the account that
            runs workspace-control, copy the xm sudo rule for the account.
        </p>
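        <p>
            As a concrete sketch (the path here is only an example pilotPath; use
            whatever you configured in pilot.conf):
        </p>
        <div class="screen"><pre>
vmm-node $ cp workspacepilot.py /opt/workspacepilot.py
vmm-node $ chmod +x /opt/workspacepilot.py</pre>
        </div>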
    </li>

    <li>
        <p>
            Open the <b>workspacepilot.py</b> file in an
            editor. These things must be configured correctly and require
            your intervention (i.e., the software cannot guess at them):
        </p>

        <ul>

            <li>
                Search for "<b>secret: pw_here</b>" around
                line 80. Replace "pw_here" with the shared secret you
                configured above.
            </li>

            <li>
                Below that, set the "minmem" setting to the value you chose
                above that we called <b>dom0-min</b>.
            </li>

            <li>
                Set the "dom0_mem" setting to the value you chose above
                that we called <b>dom0-max</b>.
            </li>

        </ul>

        <p>
            The other configurations should be explained well enough in the
            comments, and they usually do not need to be altered.
        </p>
        <p>
            You might like to create a directory for the pilot's logfiles
            instead
            of the default setting of "/tmp" for the "logfiledir"
            configuration. You might also wish to separate the config
            file from the program. The easiest way to do that is
            to configure the service to call a shell script instead of
            workspacepilot.py. This in turn could wrap the call to the
            pilot program, for example:
            <b>"/opt/workspacepilot.py -p /etc/workspace-pilot.conf $@"</b>
        </p>
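        <p>
            A minimal sketch of such a wrapper (the paths are illustrative; point
            them at your actual pilot program and config file):
        </p>
        <pre class="panel">
#!/bin/sh
# Wrapper invoked in place of workspacepilot.py.
# It forwards all arguments to the real pilot, adding the -p config option.
exec /opt/workspacepilot.py -p /etc/workspace-pilot.conf "$@"
        </pre>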

    </li>

    <li>
        Now restart the GT container and submit test workspace requests as
        usual (cloud requests work too).
    </li>

</ol>









<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->


<br />

<a name="backend-config-invm-networking"> </a>
<h2>Network configuration details _NAMELINK(backend-config-invm-networking)</h2>
<p>
    While addresses for VMs are configured and chosen within the Nimbus service,
    they are physically acquired by the VMs via an external DHCP service. There are two ways
    of arranging the DHCP configuration:
</p>
<ol>
    <li>
        Centralized -- a new or existing DHCP service that you configure with Nimbus-specific
        MAC to IP mappings. This is generally simpler to set up and is covered in the
        <a href="z2c/networking-setup.html">Zero-to-Cloud guide</a>.
    </li>
    <li>
        Local -- a DHCP server is installed on every VMM node and automatically configured
        with the appropriate addresses just before a VM boots. This is more complicated to
        set up initially but can be preferable in certain scenarios.
    </li>
</ol>

<p>
    Because Nimbus chooses the MAC address, it controls which DHCP entry will be
    retrieved by the VM. Additionally, ebtables rules are configured to ensure that
    a malicious or misconfigured VM cannot use another MAC or IP.
</p>

<p>
    In a local DHCP scenario, <tt class="literal">workspace-control</tt> on each VMM
    manages the DHCP configuration file and injects entries just before each VM boots.
    To prevent DHCP broadcast requests from getting out to the LAN, an ebtables rule is
    enacted to force packets to a specific local interface.
</p>

<p>
    Configuring local DHCP is not difficult, but you should exercise caution to
    ensure that the DHCP daemons on each VMM do not interfere with other networks.
    First of all, you must install an ISC-compatible DHCP server. This should be
    available on all Linux distributions.
</p>
<p>
    Once installed, find the DHCP configuration location. Typically this is something
    like <tt class="literal">/etc/dhcp/dhcpd.conf</tt> or
    <tt class="literal">/etc/dhcp3/dhcpd.conf</tt>. Replace this file with the example
    in the workspace-control package:
    <tt class="literal">share/workspace-control/dhcp.conf.example</tt> and then edit
    it to include proper subnet declarations for your network. Afterwards, try
    restarting DHCP and checking logs to ensure that it started without error.
</p>
<p>
    Next, edit the <tt class="literal">networks.conf</tt> file in
    <tt class="literal">etc/workspace-control/</tt>. Set the
    <tt class="literal">localdhcp</tt> option to <tt class="literal">true</tt>
    and take a look at the <tt class="literal">dhcp-bridges</tt> section to configure
    where DHCP packets are bridged to.
</p>

<p>
    Finally, you may need to edit the sudo script that workspace-control uses to
    alter <tt class="literal">dhcp.conf</tt> and restart the service. This script
    is located at <tt class="literal">libexec/workspace-control/dhcp-config.sh</tt>.
    It expects the following defaults:
</p>

<pre class="panel">
# Policy file for script to adjust
DHCPD_CONF="/etc/dhcpd.conf"

# Command to run before policy adjustment
DHCPD_STOP="/etc/init.d/dhcpd stop"

# Command to run after policy adjustment
DHCPD_START="/etc/init.d/dhcpd start"
</pre>

<p>
    You should also ensure that this script can be called via sudo as the
    <tt class="literal">nimbus</tt> user.
</p>
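<p>
    A sudoers rule along these lines is a reasonable sketch (the
    <tt class="literal">/opt/nimbus</tt> prefix is only an example; use the actual
    install path of workspace-control on your VMMs):
</p>

<pre class="panel">
# allow the nimbus user to run the DHCP adjustment script as root, no password
nimbus ALL=(root) NOPASSWD: /opt/nimbus/libexec/workspace-control/dhcp-config.sh
</pre>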

<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />


<a name="context-broker-standalone"> </a>
<h2>Configuring a standalone context broker _NAMELINK(context-broker-standalone)</h2>

<p>
    The <a href="../faq.html#ctxbroker">context broker</a> is used to
    facilitate <a href="../clouds/clusters.html">one click clusters</a>.
</p>
<p>
    The context broker
    is installed and configured automatically starting with Nimbus 2.4, but there
    is not a dependency on any Nimbus service component. It can run by itself in a
    GT container. You can use it for deploying virtual clusters on EC2 for
    example without any other Nimbus service running (the cloud client #11 has
    an "ec2script" option that will allow you to do this).
</p>
<p>
    If you want to install the broker separately from Nimbus, download the
    Nimbus source tarball, extract it, and run
    <i>scripts/gt/broker-build-and-install.sh</i> with an appropriate
    <b>$GLOBUS_LOCATION</b> set.
</p>
<p>
    To set up a standalone broker that is compatible with post-010 cloud clients,
    follow these steps:
</p>
<ol>
    <li>
        <p>
            Create a passwordless CA certificate.
        </p>

        <p>
            You can do this from an existing CA. To decrypt an RSA key, run:
            <i>openssl rsa -in cakey.pem -out cakey-unencrypted.pem</i>
        </p>
        <p>
            Alternatively, you can use the CA created by the Nimbus installer
            under $NIMBUS_HOME/var/ca
        </p>
    </li>
    <li>
        <p>
            Make very sure that the CA certificate and key files are read-only
            and private to the container running account.
        </p>
    </li>
    <li>
        <p>
            Add the CA certificate to your container's trusted certificates
            directory. The context broker (running in the container) creates
            short term credentials on the fly for the VMs. The VMs use this
            to contact the broker: the container needs to be able to verify
            who is calling.
        </p>
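        <p>
            For example, with a Globus-style trusted certificates directory,
            something like the following installs the CA certificate under its
            hash name (the directory path is illustrative; use your container's
            actual trusted certificates location):
        </p>
        <div class="screen"><pre>
$ cp cacert.pem /etc/grid-security/certificates/`openssl x509 -hash -noout -in cacert.pem`.0</pre>
        </div>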
    </li>
    <li>
        <p>
           Navigate to "<i>$NIMBUS_HOME/services/etc/nimbus-context-broker</i>"
           and adjust the "caCertPath" and "caKeyPath" parameters in the
           "<i>jndi-config.xml</i>" file to point to the CA certificate
            and key files you created in previous steps.
        </p>
        <p>
            Note that the old and new context brokers can both use the same
            CA certificate and key file.
        </p>
    </li>
    <li>
        <p>
            Container restart is required.
        </p>
    </li>
</ol>
        
<a name="cumulus"> </a>
<h2>Cumulus _NAMELINK(cumulus)</h2>
<p>
    <a href="../faq.html#cumulus">Cumulus</a> is the S3-compliant repository
    management service for Nimbus.
</p>
<p>
Cumulus is an open source implementation of the Amazon S3 REST API. It
is packaged with Nimbus,
however it can be used without Nimbus as well. Cumulus allows
you to serve files to users via a known and widely adopted REST API. Your
clients will be able to access your data service with the
Amazon S3 clients they already use.
</p>

<a name="cumulus-config"> </a>
<h3>Cumulus Configuration _NAMELINK(cumulus-config)</h3>
<p>
When the Cumulus server is run it expects to find a configuration file (typically
called cumulus.ini) in one or all of the following locations:
</p>
    <ol>
        <li>/etc/nimbus/cumulus.ini</li>
        <li>~/.nimbus/cumulus.ini</li>
        <li>the same directory from which the program was launched</li>
        <li>file pointed to by the environment variable CUMULUS_SETTINGS_FILE</li>
    </ol>
<div class="note">
<p class="note-title">cumulus.ini in Nimbus</p>
For Nimbus installations this file can be found at
<tt class="literal">$NIMBUS_HOME/cumulus/etc/cumulus.ini</tt>
</p>
</div>

<p>
Each file in the path is read in (provided it exists). The values found
in each file override the values found in the previous file in this list.

</p>


<a name="repository-location"> </a>
<h3>Repository Location _NAMELINK(repository-location)</h3>
<p>
The backend storage system in Cumulus has been created with a modular
interface that will allow us to add more
sophisticated plugins in the future, giving the administrator
many powerful options.
In the current implementation there is a single storage module which stores
user files on a mounted file system.
The reliability and performance of Cumulus will thus be
limited by the reliability and performance of that file system. Because
of this, Cumulus administrators will often want to specify a location for
the repository.
</p>
<p>
Within the cumulus.ini file there is the [posix]:directory directive.
This is the directory in which all of the files in the Cumulus repository
will be stored. The names of the files in that directory are
obfuscated based on the bucket/key name. In order to discover what file
belongs to what bucket/key you must use the user management tools included
with the Cumulus installation.
There are a series of tools under the bin
directory which start with nimbusauthz-* that can help with this. In most
cases there will be no need for a system administrator to use these
tools; they are provided for expert usage in problematic situations.
</p>
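<p>
For example, to place the repository on a larger data partition, the relevant
cumulus.ini lines might look like this (the path is only illustrative):
</p>
<pre class="panel">
[posix]
directory = /export/bigdisk/cumulus/posixdata
</pre>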

<a name="boto"> </a>
<h3>Using the boto client _NAMELINK(boto)</h3>
<p>
To use boto it is important to disable virtual host based buckets and
to point the client at the right server. Here is example code that
will instantiate a boto S3Connection for use with Cumulus:
</p>
<pre class="panel">
    cf = OrdinaryCallingFormat()
    hostname = "somehost.com"
    conn = S3Connection(id, pw, host=hostname, port=80, is_secure=False, calling_format=cf)
</pre>
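<p>
As a quick sanity check (assuming the connection above succeeded and your
credentials are valid), listing your buckets works just as it would against S3:
</p>
<pre class="panel">
    # print the name of every bucket owned by this user
    for bucket in conn.get_all_buckets():
        print bucket.name
</pre>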

<a name="s3cmd"> </a>
<h3>Using the s3cmd client _NAMELINK(s3cmd)</h3>
<p>
Once you have s3cmd successfully installed and configured, you must
modify the file $HOME/.s3cfg in order to direct it at this server.
Make sure the following key/value pairs reflect these changes:
</p>
<pre class="panel">
    access_key = &lt;access id&gt;
    secret_key = &lt;access secret&gt;
    host_base = &lt;hostname of service&gt;
    host_bucket = &lt;hostname of server&gt;
    use_https = False
</pre>
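<p>
With those settings in place, ordinary s3cmd operations should go against
Cumulus (the bucket and file names here are only examples):
</p>
<pre class="panel">
    $ s3cmd mb s3://mybucket
    $ s3cmd put image.tar.gz s3://mybucket/image.tar.gz
    $ s3cmd ls s3://mybucket
</pre>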
<a name="cumulus-https"> </a>
<h3>Using HTTPS with Cumulus _NAMELINK(cumulus-https)</h3>
<p>
In order to use a secure https connection with Cumulus you must edit the
cumulus.ini file and provide it with a certificate and key pair. In
a typical Nimbus installation these are generated for you and placed at:
</p>
<p>
<tt class="literal">$NIMBUS_HOME/var/hostcert.pem</tt>
<br />
<tt class="literal">$NIMBUS_HOME/var/hostkey.pem</tt>
</p>
<p>
To add them to the cumulus.ini file, add the following lines:
</p>
<pre class="panel">
[https]
enabled=True
key=/home/nimbus/var/hostkey.pem
cert=/home/nimbus/var/hostcert.pem
</pre>

<a name="cumulus-quotas"> </a>
<h3>Disk Usage Quotas _NAMELINK(cumulus-quotas)</h3>
<p>
Cumulus allows administrators to set disk space limits on a per-user basis.
By default users are created with unlimited space. To set a disk quota
limit, use the program <tt class="literal">$NIMBUS_HOME/ve/bin/cumulus-quota</tt>.
Here is an example that will set the user user1@nimbusproject.org to a
100 byte limit:
</p>
<pre class="panel">
$ ./ve/bin/cumulus-quota user1@nimbusproject.org 100
$ ./ve/bin/cumulus-list-users user1@nimbusproject.org
friendly : user1@nimbusproject.org
ID : Ar2yXcfdhImjMNeWGUHJZ
password : ddOWFSC5rol9L6Tk14hA0QeS7valQdy38xeVvkFZwq
quota : 100
canonical id : 21161ebe-862a-11df-a9ca-001de0a80259
</pre>

<a name="cumulusnimbusconfig"> </a>
<h3>Configuring Cumulus Options in Nimbus _NAMELINK(cumulusnimbusconfig)</h3>

<p>
There are a few variables that Nimbus relies on to find information
about its co-located Cumulus server. These variables are found in
the file
<tt class="literal">./services/etc/nimbus/workspace-service/cumulus.conf</tt>.
They are normally written automatically by the nimbus-configure program,
but if an administrator makes some manual changes to their Nimbus installation,
some of these variables may need to be changed as well.
</p>

<ul>
    <li>
    <tt class="literal">cumulus.authz.db</tt>:
    the Cumulus authz database. By default it is an sqlite database and
    is located at $NIMBUS_HOME/cumulus/etc/authz.db
    </li>

    <li>
    <tt class="literal">cumulus.repo.dir</tt>:
    the location of the Cumulus posix backend file repository. By default
    this is $NIMBUS_HOME/cumulus/posixdata. Quite often users will want
    to change this to a more favorable location, likely one with more
    disk space or faster disks.
    </li>

    <li>
    <tt class="literal">cumulus.repo.bucket</tt>:
    the Cumulus bucket in which all cloud client images are stored.
    </li>

    <li>
    <tt class="literal">cumulus.repo.prefix</tt>:
    the prefix with which all image names are prepended.
    </li>
</ul>
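<p>
For reference, a cumulus.conf sketch with $NIMBUS_HOME at /home/nimbus might look
roughly like this (the bucket and prefix values are only illustrative; keep
whatever nimbus-configure wrote unless you have moved things):
</p>
<pre class="panel">
cumulus.authz.db=/home/nimbus/cumulus/etc/authz.db
cumulus.repo.dir=/home/nimbus/cumulus/posixdata
cumulus.repo.bucket=Repo
cumulus.repo.prefix=VMS
</pre>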

<a name="lantorrent"> </a>
<h2>LANTorrent _NAMELINK(lantorrent)</h2>
<p>
    <a href="../faq.html#lantorrent">LANTorrent</a> is a fast multicast file
distribution protocol designed to saturate all the links in a switch.
There are several optimizations planned for future releases of LANTorrent.
</p>
<p>
LANTorrent works best for the following scenarios:
    <ol>
        <li>large file transfers (VM images are typically measured in gigabytes)</li>
        <li>local area switched network (typical for data center computer racks)</li>
        <li>file recipients are willing peers.
            Unlike other peer to peer transfer protocols, bittorrent for
            example, LANTorrent is not designed with leeches in mind. It
            is designed under the assumption that every peer is a willing
            and able participant.</li>
        <li>many endpoints request the same file at roughly the same time</li>
    </ol>
</p>

<a name="lantorrent-protocol"> </a>
<h3>LANTorrent Protocol _NAMELINK(lantorrent-protocol)</h3>
<p>
When an endpoint wants a file it submits a request to a central agent.
This agent aggregates requests for files so that they can be sent out in
a single efficient multicast session. Each request for a source file
is stored until either N requests on that file have been made or N'
seconds have passed since the last request on that source file was
made. This allows a user to request a single file in several
unrelated sessions yet still have the file transferred in an efficient
multicast session.
</p>
<p>
Once N requests for a given source file have been made or N' seconds
have passed, the destination set for the source file is determined. A
chain of destination endpoints is formed such that each node receives
from and sends to one other node. The first node receives from the
repository and sends to a peer node, that peer node sends to another,
and so on until all receive the file. In this way all links of the
switch are utilized to send directly to another endpoint in the switch.
This results in the most efficient transfer on a LAN switched network.
</p>
<p>
Oftentimes in an IaaS system a single network endpoint (VMM) will want
multiple copies of the same file. Each file is booted as a virtual
machine, and that virtual machine will make distinct changes to that file
as it runs, so it needs its own copy of the file. However, that file
does not need to be transferred across the network more than once.
LANTorrent will send the file to each endpoint once and instruct that
endpoint to write it to multiple files if needed.
</p>


<a name="lantorrent-config"> </a>
<h3>LANTorrent Configuration _NAMELINK(lantorrent-config)</h3>
<p>
LANTorrent is not enabled in a default Nimbus installation. A few
additional steps are required to enable it.
</p>
<p>
The following software is required on both service and VMM nodes:
    <ol>
        <li>Python 2.5+ (but not Python 3.x)</li>
        <li>Python <tt class="literal">simplejson</tt> module</li>
    </ol>
</p>

<p>
LANTorrent is run out of <tt class="literal">xinetd</tt>, so xinetd must also be installed on all VMMs.
</p>

<p>
To install LANTorrent you must take the following steps:

<ol>
<li>
    <p>
        Edit <tt class="literal">$NIMBUS_HOME/nimbus-setup.conf</tt> and enable LANTorrent:
    </p>
    <pre class="panel">lantorrent.enabled: True</pre>
</li>

<li>
    <p>
        Edit <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/workspace-service/other/common.conf</tt>
        and change the value of <tt class="literal">propagate.extraargs</tt>:
    </p>
    <pre class="panel">propagate.extraargs=$NIMBUS_HOME/lantorrent/bin/lt-request.sh</pre>

    <p>
    Be sure to expand <tt class="literal">$NIMBUS_HOME</tt> to its full and actual path.
    </p>
</li>

<li>
    <p>Install LANTorrent on the VMMs</p>
    <ul>
        <li>Copy lantorrent to the VMM nodes:
            from the source distribution of Nimbus, copy the lantorrent
            directory to all VMM nodes.</li>
        <li>Optional: you may want to set up a Python virtual environment for the installation.</li>
        <li>Run <tt class="literal">python setup-vmm.py install</tt> on each node
            (a sketch of these steps follows this list).
            This will output the xinetd file that you will need to install
            into /etc/xinetd.d. Note the <i>user</i> value in the output:
            this should be the same user as your workspace control user.
        </li>
    </ul>
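    <p>
    A rough sketch of those steps on one VMM (hostnames and paths are only
    examples):
    </p>
    <pre class="panel">
    service-node $ scp -r lantorrent/ nimbus@vmm01:/tmp/
    service-node $ ssh nimbus@vmm01
    vmm01 $ cd /tmp/lantorrent
    vmm01 $ python setup-vmm.py install
    </pre>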
</li>

<li>
    <p>Install LANTorrent into xinetd</p>
    <p>
    The <tt class="literal">setup-vmm.py</tt> script outputs the needed xinetd <tt class="literal">lantorrent</tt> file. For example:
    <pre class="panel">
    ============== START WITH THE NEXT LINE ==================
    service lantorrent
    {
        type = UNLISTED
        disable = no
        socket_type = stream
        protocol = tcp
        user = bresnaha
        wait = no
        port = 2893
        server = /home/bresnaha/lt1/bin/ltserver
    }
    =============== END WITH THE PREVIOUS LINE =================
    </pre>
    This output must be copied into <tt class="literal">/etc/xinetd.d/lantorrent</tt>. Once done,
    restart xinetd.
    </p>
     <pre class="panel">
     # /etc/init.d/xinetd restart
     </pre>
</li>
<li>
    <p>Change the propagation method.</p>
    <p>Edit the file:
        <tt class="literal">$NIMBUS_HOME/services/etc/nimbus/workspace-service/other/authz-callout-ACTIVE.xml</tt>
        and change:
    </p>
    <pre class="panel">&lt;property name="repoScheme" value="scp" /&gt;</pre>
    to:
    <pre class="panel">&lt;property name="repoScheme" value="lantorrent" /&gt;</pre>
</li>
<li>Restart the service:
<pre class="panel">
$ $NIMBUS_HOME/bin/nimbusctl restart
</pre>
</li>

<li>
    <p>
        [optional] If the path to nimbus on the workspace-control nodes (VMMs)
    is not <tt class="literal">/opt/nimbus</tt> you will also need to edit a configuration file on
    all backend nodes.
    </p>

    <p>
    In the file <tt class="literal">&lt;workspace-control path&gt;/etc/workspace-control/propagation.conf</tt>,
    adjust the value of:
    </p>

    <pre class="panel">
    lantorrentexe: /opt/nimbus/bin/ltclient.sh
    </pre>

    <p>to point to the proper location of your <tt class="literal">ltclient.sh</tt> script. This should
    be a simple matter of changing <tt class="literal">/opt/nimbus</tt> to the path where you chose
    to install workspace control.
    </p>
</li>
</ol>


<!-- *********************************************************************** -->
<!-- *********************************************************************** -->
<!-- *********************************************************************** -->

<br />

<a name="backfill-and-spot-instances"> </a>
<h2>Backfill and Spot Instances _NAMELINK(backfill-and-spot-instances)</h2>
<p>
    Backfill and Spot Instances are two related features; they both deal with
    <i>asynchronous</i> instance requests: requests that may only be started
    at appropriate times, if ever.
</p>
<p>
Spot instances are requested by remote users with a particular bid
(represented in Nimbus as a discount applied to minutes charged to your
account). Users may consult the spot price history before bidding. If the
bid is accepted (equal to or higher than the current spot price), the instances
are started. They may be stopped at a moment's notice. The implementation
is of EC2's 2010-08-31 WSDL; see the
<a href="http://aws.amazon.com/ec2/spot-instances/">Amazon EC2 Spot
Instances</a> guide for more background.
</p>
<p>
Backfill is a mechanism that the <i>administrator</i> configures to keep
idle resources busy. You pick a particular VM image that will be launched
when the nodes would otherwise be idle. This works nicely with systems
such as <a href="http://www.cs.wisc.edu/condor/">Condor</a> that can
gracefully deal with being preempted.
</p>
<p>
    To jump to the precise semantics and configurations that are possible, see
    the comments in the
    <a href="https://github.com/nimbusproject/nimbus/blob/HEAD/service/service/java/source/etc/workspace-service/async.conf">async.conf</a>
    file.
</p>
<p>
    To start using spot instances as a user, follow the
    <a href="../elclients.html#spot">spot instance user's guide</a>.
</p>
<p>
    To begin with backfill, read over the above conf file first. Choose a
    current administrator account to launch the image from or create one with
    the "--dn" flag like so:
</p>

<div class="screen">
<b>$</b> ./bin/nimbus-new-user --dn BACKFILL-SUPERUSER backfill@localhost<br>
</div>

<p>
    ... where "BACKFILL-SUPERUSER" is the user configured in async.conf (that
    is the default value).
</p>
<p>
    Backfill responds <i>immediately</i> to changes in the resource pool like
    the presence of higher priority requests (which includes spot instances if
    they are configured, as well as regular requests of course). This also
    includes when the resource pool is fundamentally changed by the
    <tt class="literal">nimbus-nodes</tt> program. When you add and remove
    nodes, you are changing the overall capacity which is a critical piece
    of information for mapping asynchronous requests like backfill and spot
    instances.
</p>
<p>
    A ramification of that is that, since adding nodes is always done when the
    service is running, having backfill enabled while you make adjustments might
    get in your way during this period. It can be easier to disable backfill,
    make node adjustments, and then re-enable backfill. This applies especially
    to <i>removing</i> nodes, since the <tt class="literal">nimbus-nodes</tt>
    program does not allow nodes to be removed that have instances running on
    them.
</p>
<p>
    This could be said for spot instances as well. It is slightly trickier with
    spot instances because they are remote user requests and you may not want to
    disable things abruptly for people. But on the other hand, they are spot
    instances so they should be able to deal with sudden terminations.
</p>
<p>
    Another thing to watch out for is using the "max.instances=0" configuration.
    Zero means as many as possible, but this maximum number is only calculated
    at service startup time. So if you add a bunch of new nodes to the resource
    pool and the backfill instances do not immediately start consuming them
    greedily, this is what is happening. After adding all those nodes, stop and
    then start the service and the backfill configuration will recalibrate.
</p>

<a href="#libvirttemplate">libvirt template and virtio</a>
<a name="libvirttemplate"> </a>
<h2>libvirt template and virtio</h2>
<p>
The submission of VMs to the hypervisor (with Xen or KVM) is done by Nimbus
via libvirt. When submitting to libvirt, the VM run request is described
in an XML file. This file has many potential customizations depending on
the feature set supported by a particular VMM. Many of the values are
dynamically determined by Nimbus, however a site admin may want to add their
own custom optimizations.
</p>
<p>
The XML used for every VM submission is generated from a template found
in the workspace-control installation at
<tt class="literal">
etc/workspace-control/libvirt_template.xml
</tt>.
Admins can add site specific optimizations (like virtio) to this template.
</p>
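<p>
As an illustration only (this is not the template itself, which contains Nimbus
placeholders that must be left in place), a network device using the virtio
model looks like this in standard libvirt XML; an admin would add attributes of
this kind alongside the generated values:
</p>
<pre class="panel">
&lt;interface type='bridge'&gt;
    &lt;source bridge='virbr0'/&gt;
    &lt;!-- requests the paravirtualized virtio NIC model inside the guest --&gt;
    &lt;model type='virtio'/&gt;
&lt;/interface&gt;
</pre>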

<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />

_NIMBUS_CENTER2_COLUMN_END
_NIMBUS_FOOTER1
_NIMBUS_FOOTER2
_NIMBUS_FOOTER3