Remove Essex/Folsom cruft from Grizzly guides
partially fixes bug 1087484

With Grizzly a couple of months away, the documentation team is working
on its documentation as a priority. As part of the release of new
documents, references to previous releases that are no longer useful
or relevant should be removed.

This change:
* removes reference to pre-essex legacy options on keystone import
* removes deleted simple scheduler from glossary
* clarifies XenAPI pools description for current status
* removes note about Essex client mismatch in reset-state
* removes references to Essex and Diablo from dodai-deploy docs
* removes an outdated (Essex) configuration file example
* updates several links to point to Grizzly versions
* removes outdated (Essex) ISO distribution information
* removes outdated (Diablo, Essex) vnc proxy information
* updates guide titles and versions where appropriate
* removes old notes about config syntax

Change-Id: I81fab078f17a003c12b7f86cb266892b89c79a94
fifieldt committed Feb 1, 2013
1 parent 45c0b3a commit 6128ab5
Showing 11 changed files with 34 additions and 314 deletions.
12 changes: 0 additions & 12 deletions doc/src/docbkx/common/ch_identity_mgmt.xml
@@ -169,18 +169,6 @@ keystone-all
<literal>db_sync</literal>: Sync the database.
</para>
</listitem>
<listitem>
<para>
<literal>import_legacy</literal>: Import a legacy (pre-essex)
version of the db.
</para>
</listitem>
<listitem>
<para>
<literal>export_legacy_catalog</literal>: Export service
catalog from a legacy (pre-essex) db.
</para>
</listitem>
<listitem>
<para>
<literal>import_nova_auth</literal>: Load auth data from a
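The subcommands in this hunk are all invoked through <command>keystone-manage</command>. A minimal DocBook-style sketch of the most common invocation, following the <prompt>/<userinput> conventions this guide already uses (the command itself is taken from the surrounding text; any further arguments depend on the release):

```xml
<screen><prompt>$</prompt> <userinput>keystone-manage db_sync</userinput></screen>
```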
7 changes: 0 additions & 7 deletions doc/src/docbkx/common/glossary/glossary-terms.xml
@@ -4070,13 +4070,6 @@
cloud, currently unsupported by OpenStack.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>Simple Scheduler</glossterm>
<glossdef>
<para>Volume scheduler type within Nova, deprecated in
Folsom release.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>Single-root I/O Virtualization
(SR-IOV)</glossterm>
7 changes: 2 additions & 5 deletions doc/src/docbkx/common/introduction-to-xen.xml
@@ -215,12 +215,9 @@ Some notes on the networking:
<section xml:id="pools">
<title>XenAPI pools</title>

<para>Before OpenStack 2012.1 ("Essex"), all XenServer machines used with
OpenStack are standalone machines, usually only using local storage.</para>

<para>However in 2012.1 and later, the host-aggregates feature allows you to
<para>The host-aggregates feature allows you to
create pools of XenServer hosts (configuring shared storage is still an out of
band activity). This move will enable live migration when using shared
band activity), to enable live migration when using shared
storage.</para>

</section>
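The host-aggregates workflow described above can be sketched with the nova client, in the same <screen> markup the guide uses elsewhere. The aggregate name, aggregate ID, and host name below are placeholders, and exact client syntax varies by release:

```xml
<screen><prompt>$</prompt> <userinput>nova aggregate-create xenserver-pool nova</userinput>
<prompt>$</prompt> <userinput>nova aggregate-add-host 1 xenserver-host1</userinput></screen>
```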
10 changes: 1 addition & 9 deletions doc/src/docbkx/common/support-compute.xml
@@ -46,14 +46,6 @@
<prompt>$</prompt> <userinput>nova delete c6bbbf26-b40a-47e7-8d5c-eb17bf65c485</userinput></screen></para>
<para>You can also use the <literal>--active</literal> parameter to force the instance back into
an active state instead of an error state, for example:<screen><prompt>$</prompt> <userinput>nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485</userinput></screen>
<note>
<para>The version of the <command>nova</command> client that ships with Essex on
most distributions does not support the <literal>reset-state</literal>
command. You can download a more recent version of the
<command>nova</command> client from PyPI. The package name is <link
xlink:href="http://pypi.python.org/pypi/python-novaclient/"
>python-novaclient</link>, which can be installed using a Python package
tool such as pip.</para>
</note></para>
</para>
</section>
</chapter>
30 changes: 1 addition & 29 deletions doc/src/docbkx/openstack-compute-admin/computeautomation.xml
@@ -40,12 +40,6 @@ format="SVG" scale="60"/>
<listitem>
<para>OpenStack Folsom (Compute, Glance, Swift, Keystone). Compute includes Nova, Horizon, Cinder, and Quantum.</para>
</listitem>
<listitem>
<para>OpenStack Essex(Nova with dashboard, Glance, Swift, Keystone)</para>
</listitem>
<listitem>
<para>OpenStack Diablo(Nova, Glance, Swift)</para>
</listitem>
<listitem>
<para>hadoop 0.22.0</para>
</listitem>
@@ -97,20 +91,6 @@ format="SVG" scale="60"/>
<td><para></para></td>
<td><para>:)</para></td>
</tr>
<tr>
<td><para>OpenStack Essex (Nova with dashboard, Glance, Swift, Keystone)</para></td>
<td><para></para></td>
<td><para></para></td>
<td><para></para></td>
<td><para>:)</para></td>
</tr>
<tr>
<td><para>OpenStack Diablo (Nova, Glance, Swift)</para></td>
<td><para>:)</para></td>
<td><para>:)</para></td>
<td><para>:)</para></td>
<td><para></para></td>
</tr>
<tr>
<td><para>hadoop 0.22.0</para></td>
<td><para>:)</para></td>
@@ -295,15 +275,7 @@ format="SVG" scale="60"/>
<para>SSH login to a nova instance after the test of nova</para>
<para>An instance will be started during the test of nova. After the test,
you can log in to the instance by executing the following commands.</para>
<para>For openstack nova diablo,</para>
<screen>
<prompt>$</prompt> <userinput>sudo -i</userinput>
<prompt>$</prompt> <userinput>cd /tmp/nova</userinput>
<prompt>$</prompt> <userinput>. env/novarc</userinput>
<prompt>$</prompt> <userinput>euca-describe-instances</userinput>
<prompt>$</prompt> <userinput>ssh -i mykey.priv 10.0.0.3</userinput>
</screen>
<para>For openstack nova essex and folsom,</para>
<para>For OpenStack Nova Folsom,</para>
<screen>
<prompt>$</prompt> <userinput>sudo -i</userinput>
<prompt>$</prompt> <userinput>cd /var/lib/nova</userinput>
168 changes: 6 additions & 162 deletions doc/src/docbkx/openstack-compute-admin/computeconfigure.xml
@@ -415,168 +415,12 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
</section>

<section xml:id="sample-nova-configuration-files">
<title>Example <filename>nova.conf</filename> Configuration Files</title>

<para>The following sections describe many of the configuration
option settings that can go into the
<filename>nova.conf</filename> files. Copies of each
<filename>nova.conf</filename> file need to be copied to each
compute node. Here are some sample
<filename>nova.conf</filename> files that offer examples of
specific configurations.</para>

<simplesect>
<title>Essex configuration using KVM, FlatDHCP, MySQL, Glance,
LDAP, and optionally sheepdog, API is EC2</title>

<para>From <link xlink:href="http://gerrit.wikimedia.org"
>gerrit.wikimedia.org</link>, used with permission. Where
you see parameters passed in, they are reading from Puppet
configuration files. For example, a variable like &lt;%=
novaconfig["my_ip"] %&gt; is for the puppet templates they use
to deploy.</para>

<programlisting>
[DEFAULT]

verbose=True
auth_strategy=keystone
connection_type=libvirt
root_helper=sudo /usr/bin/nova-rootwrap
instance_name_template=i-%08x
daemonize=1
scheduler_driver=nova.scheduler.simple.SimpleScheduler
max_cores=200
my_ip=&lt;%= novaconfig["my_ip"] %&gt;
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
sql_connection=mysql://&lt;%= novaconfig["db_user"] %&gt;:&lt;%= novaconfig["db_pass"] %&gt;@&lt;%= novaconfig["db_host"] %&gt;/&lt;%= novaconfig["db_name"] %&gt;
image_service=nova.image.glance.GlanceImageService
s3_host=&lt;%= novaconfig["glance_host"] %&gt;
glance_api_servers=&lt;%= novaconfig["glance_host"] %&gt;:9292
rabbit_host=&lt;%= novaconfig["rabbit_host"] %&gt;
cc_host=&lt;%= novaconfig["cc_host"] %&gt;
network_host=&lt;%= novaconfig["network_host"] %&gt;
ec2_url=http://&lt;%= novaconfig["api_host"] %&gt;:8773/services/Cloud
ec2_dmz_host=&lt;%= novaconfig["api_ip"] %&gt;
dmz_cidr=&lt;%= novaconfig["dmz_cidr"] %&gt;
libvirt_type=&lt;%= novaconfig["libvirt_type"] %&gt;
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
flat_network_dhcp_start=&lt;%= novaconfig["dhcp_start"] %&gt;
dhcp_domain=&lt;%= novaconfig["dhcp_domain"] %&gt;
network_manager=nova.network.manager.FlatDHCPManager
flat_interface=&lt;%= novaconfig["network_flat_interface"] %&gt;
flat_injected=False
flat_network_bridge=&lt;%= novaconfig["flat_network_bridge"] %&gt;
fixed_range=&lt;%= novaconfig["fixed_range"] %&gt;
public_interface=&lt;%= novaconfig["network_public_interface"] %&gt;
routing_source_ip=&lt;%= novaconfig["network_public_ip"] %&gt;
node_availability_zone=&lt;%= novaconfig["zone"] %&gt;
zone_name=&lt;%= novaconfig["zone"] %&gt;
quota_floating_ips=&lt;%= novaconfig["quota_floating_ips"] %&gt;
multi_host=True
api_paste_config=/etc/nova/api-paste.ini
#use_ipv6=True
allow_same_net_traffic=False
live_migration_uri=&lt;%= novaconfig["live_migration_uri"] %&gt;
</programlisting>
<para>These represent configuration role classes used by the puppet configuration files to build
out the rest of the <filename>nova.conf</filename> file. </para>
<programlisting>
ldap_base_dn => "dc=wikimedia,dc=org",
ldap_user_dn => "uid=novaadmin,ou=people,dc=wikimedia,dc=org",
ldap_user_pass => $passwords::openstack::nova::nova_ldap_user_pass,
ldap_proxyagent => "cn=proxyagent,ou=profile,dc=wikimedia,dc=org",
ldap_proxyagent_pass => $passwords::openstack::nova::nova_ldap_proxyagent_pass,
controller_mysql_root_pass => $passwords::openstack::nova::controller_mysql_root_pass,
puppet_db_name => "puppet",
puppet_db_user => "puppet",
puppet_db_pass => $passwords::openstack::nova::nova_puppet_user_pass,
# By default, don't allow projects to allocate public IPs; this way we can
# let users have network admin rights, for firewall rules and such, and can
# give them public ips by increasing their quota
quota_floating_ips => "0",
libvirt_type => $realm ? {
"production" => "kvm",
"labs" => "qemu",
db_host => $controller_hostname,
dhcp_domain => "pmtpa.wmflabs",
glance_host => $controller_hostname,
rabbit_host => $controller_hostname,
cc_host => $controller_hostname,
network_flat_interface => $realm ? {
"production" => "eth1.103",
"labs" => "eth0.103",
},
network_flat_interface_name => $realm ? {
"production" => "eth1",
"labs" => "eth0",
},
network_flat_interface_vlan => "103",
flat_network_bridge => "br103",
network_public_interface => "eth0",
network_host => $realm ? {
"production" => "10.4.0.1",
"labs" => "127.0.0.1",
},
api_host => $realm ? {
"production" => "virt2.pmtpa.wmnet",
"labs" => "localhost",
},
api_ip => $realm ? {
"production" => "10.4.0.1",
"labs" => "127.0.0.1",
},
fixed_range => $realm ? {
"production" => "10.4.0.0/24",
"labs" => "192.168.0.0/24",
},
dhcp_start => $realm ? {
"production" => "10.4.0.4",
"labs" => "192.168.0.4",
},
network_public_ip => $realm ? {
"production" => "208.80.153.192",
"labs" => "127.0.0.1",
},
dmz_cidr => $realm ? {
"production" => "208.80.153.0/22,10.0.0.0/8",
"labs" => "10.4.0.0/24",
},
controller_hostname => $realm ? {
"production" => "labsconsole.wikimedia.org",
"labs" => $fqdn,
},
ajax_proxy_url => $realm ? {
"production" => "http://labsconsole.wikimedia.org:8000",
"labs" => "http://${hostname}.${domain}:8000",
},
ldap_host => $controller_hostname,
puppet_host => $controller_hostname,
puppet_db_host => $controller_hostname,
live_migration_uri => "qemu://%s.pmtpa.wmnet/system?pkipath=/var/lib/nova",
zone => "pmtpa",
keystone_admin_token => $keystoneconfig["admin_token"],
keystone_auth_host => $keystoneconfig["bind_ip"],
keystone_auth_protocol => $keystoneconfig["auth_protocol"],
keystone_auth_port => $keystoneconfig["auth_port"],
</programlisting>

<figure xml:id="Nova_conf_KVM_LDAP">
<title>KVM, FlatDHCP, MySQL, Glance, LDAP, and optionally
sheepdog</title>

<mediaobject>
<imageobject>
<imagedata fileref="figures/SCH_5003_V00_NUAC-Network_mode_KVM_LDAP_OpenStack.png"
scale="60"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>

<title>Example <filename>nova.conf</filename> Configuration Files</title>
<para>
The following sections describe many of the configuration option settings that can go into the
nova.conf files. Copies of each nova.conf file need to be copied to each compute node. Here are
some sample nova.conf files that offer examples of specific configurations.
</para>
<simplesect>
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2 API</title>

42 changes: 10 additions & 32 deletions doc/src/docbkx/openstack-compute-admin/computeinstall.xml
@@ -160,7 +160,7 @@
<term><link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/yum/content/"
>OpenStack Install and Deploy Manual - Red Hat</link>
(Folsom)</term>
(Grizzly)</term>
<listitem>
<para>This guide walks through an installation using
packages available through Fedora 17 as well as on RHEL
@@ -171,33 +171,21 @@
</varlistentry>
<varlistentry>
<term><link
xlink:href="https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17"
>Getting Started with OpenStack on Fedora 17</link>
(Essex)</term>
xlink:href="https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_18"
>Getting Started with OpenStack on Fedora 18</link>
(Folsom)</term>

<listitem>
<para>The Essex release is in Fedora 17. This page discusses the
installation of Essex on Fedora 17. Once EPEL 6 has been updated to
include Essex, these instructions should be used if installing on
RHEL 6. The main difference between the Fedora 17 instructions and
<para>The Folsom release is in Fedora 18. This page discusses the
installation of Folsom on Fedora 18. Once EPEL 6 has been updated to
include Folsom, these instructions should be used if installing on
RHEL 6. The main difference between the Fedora 18 instructions and
what must be done on RHEL 6 is that RHEL 6 does not use systemd, so
the <command>systemctl</command> commands will have to be substituted
with their RHEL 6 equivalents.</para>
</listitem>
</varlistentry>
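The substitution mentioned above is mechanical; for example, in the document's own <screen> conventions (the service name is illustrative — it is the Fedora/RHEL package's service name, not taken from this page):

```xml
<screen><prompt>#</prompt> <userinput>systemctl start openstack-nova-compute.service</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-compute start</userinput></screen>
```

The first form is the Fedora 18 (systemd) invocation; the second is its RHEL 6 (SysV init) equivalent.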

<varlistentry>
<term><link
xlink:href="https://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova"
>Getting Started with OpenStack Nova</link> (Fedora 16/
Diablo)</term>

<listitem>
<para>This page was originally written as instructions for
getting started with OpenStack on Fedora 16, which
includes the Diablo release.</para>
</listitem>
</varlistentry>
</variablelist>
</section>

@@ -315,16 +303,6 @@ All repositories have been refreshed.</computeroutput>
<section xml:id="iso-ubuntu-installation">
<title>ISO Installation</title>

<para>Two ISO distributions are available for Essex: </para>
<para>See <link
xlink:href="http://sourceforge.net/projects/stackops/files/"
>http://sourceforge.net/projects/stackops/files/</link> for
download files and information, license information, and a
<filename>README</filename> file. For documentation on the
StackOps ISO, see <link xlink:href="http://docs.stackops.org"
>http://docs.stackops.org</link>. For free support, go to
<link xlink:href="http://getsatisfaction.com/stackops"
>http://getsatisfaction.com/stackops</link>.</para>
<para>See <link
xlink:href="http://www.rackspace.com/knowledge_center/article/installing-rackspace-private-cloud-on-physical-hardware"
>Installing Rackspace Private Cloud on Physical
@@ -337,13 +315,13 @@ All repositories have been refreshed.</computeroutput>
<title>Scripted Installation</title>

<para>You can download a script for a standalone install for
proof-of-concept, learning, or for development purposes for Ubuntu 11.04
proof-of-concept, learning, or for development purposes for Ubuntu 12.04
at <link
xlink:href="http://devstack.org">http://devstack.org</link>.</para>
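A typical devstack run looks like the following sketch, again in the guide's <screen> markup. The repository URL is an assumption based on the project's hosting at the time — check devstack.org for current instructions:

```xml
<screen><prompt>$</prompt> <userinput>git clone git://github.com/openstack-dev/devstack.git</userinput>
<prompt>$</prompt> <userinput>cd devstack</userinput>
<prompt>$</prompt> <userinput>./stack.sh</userinput></screen>
```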

<orderedlist>
<listitem>
<para>Install Ubuntu 12.10 or RHEL/CentOS/Fedora 16:</para>
<para>Install Ubuntu 12.04 or RHEL/CentOS/Fedora 16:</para>

<para>In order to correctly install all the dependencies, we assume
a specific version of the OS to make it as easy as possible.</para>
