<?xml version="1.0" encoding="UTF-8"?>
<chapter
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_install">
<title>OpenStack Networking Installation</title>
<para> This chapter describes how to install the OpenStack Networking service
and get it up and running. </para>
<para>If you are building a host from scratch to use for OpenStack Networking,
we strongly recommend using Ubuntu 12.04/12.10 or Fedora 17/18
as these platforms have OpenStack Networking packages and receive
significant testing.</para>
    <para>OpenStack Networking requires dnsmasq 2.59 or later to support
        all of the options it uses.</para>
<section xml:id="install_ubuntu">
<title>Install Packages (Ubuntu) </title>
        <note>
            <para>These instructions use the Ubuntu Cloud Archive. An
                explanation of each possible sources.list entry can be
                found here: <link
                    xlink:href="http://blog.canonical.com/2012/09/14/now-you-can-have-your-openstack-cake-and-eat-it/"
                    >http://bit.ly/Q8OJ9M </link></para>
        </note>
<para>Point to Grizzly PPAs:
</para>
<screen><prompt>#</prompt> <userinput>echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >>/etc/apt/sources.list.d/grizzly.list</userinput>
<prompt>#</prompt> <userinput>apt-get install ubuntu-cloud-keyring </userinput>
<prompt>#</prompt> <userinput>apt-get update</userinput>
<prompt>#</prompt> <userinput>apt-get upgrade</userinput> </screen>
        <note>
            <para>Use "sudo" to install and configure packages, as
                these operations require superuser privileges.</para>
        </note>
<section xml:id="install_quantum_server">
<title>Install quantum-server </title>
<para>Install quantum-server and CLI for accessing the
API: </para>
<screen><computeroutput>apt-get -y install quantum-server python-quantumclient</computeroutput></screen>
<para>You will also want to install the plugin you choose
to use, for example: </para>
<screen><computeroutput>apt-get -y install quantum-plugin-<plugin-name></computeroutput></screen>
<para>Most plugins require a database to be installed and
configured in a plugin configuration file. For
example: </para>
<screen><computeroutput>apt-get -y install mysql-server python-mysqldb python-sqlalchemy </computeroutput></screen>
            <para>A database that you are already using for other
                OpenStack services will work fine for this. Simply
                create a 'quantum' database: </para>
            <screen><computeroutput>mysql -u <user> -p -e "create database quantum"</computeroutput></screen>
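            <para>If you create a dedicated database user rather than
                reusing an existing one, also grant it privileges on the
                new database. The user name below is illustrative;
                substitute your own user and password:</para>
            <screen><computeroutput>mysql -u root -p -e "GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY '<password>'"</computeroutput></screen>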
<para>And then configure the plugin’s configuration file
to use this database. Find the plugin configuration
file in <filename>/etc/quantum/plugins/<plugin-name></filename> (For
example,
<filename>/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini</filename>)
and set: </para>
<screen><computeroutput>sql_connection = mysql://<user>:<password>@localhost/quantum?charset=utf8</computeroutput></screen>
<section xml:id="rpc_setup">
<title>RPC Setup </title>
                <para>Many OpenStack Networking plugins use RPC to
                    allow agents to communicate with the main
                    quantum-server process. If your plugin requires agents,
                    it can use the same RPC mechanism used by other OpenStack
                    components like Nova. </para>
<para>To use RabbitMQ as the message bus for RPC, make
sure that rabbit is installed on a host reachable
via the management network (if this is already the
case because of deploying another service like
Nova, this existing RabbitMQ setup is
sufficient): </para>
<screen><computeroutput>apt-get install rabbitmq-server
rabbitmqctl change_password guest <password></computeroutput></screen>
<para>Then update /etc/quantum/quantum.conf with these
values: </para>
<screen><computeroutput>rabbit_host=<mgmt-IP-of-rabbit-host>
rabbit_password=<password>
rabbit_userid=guest </computeroutput></screen>
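                <para>To confirm that the broker is running before
                    starting any quantum services, you can check its
                    status on the RabbitMQ host:</para>
                <screen><prompt>#</prompt> <userinput>rabbitmqctl status</userinput></screen>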
<important>
<para>This /etc/quantum/quantum.conf file should be
copied to and used on all hosts running
quantum-server or any quantum-*-agent binaries. </para>
</important>
</section>
<section xml:id="openvswitch_plugin">
<title>Plugin Configuration: OVS Plugin</title>
                <para>Using the Open vSwitch (OVS) plugin in a
                    deployment with multiple hosts requires the use
                    of either tunneling or VLANs to isolate
                    traffic from multiple networks. Tunneling is
                    easier to deploy, as it does not require
                    configuring VLANs on network switches, so that is
                    what we describe here. More advanced deployment
                    options are described in <link
                        linkend="ch_adv_config"/>.</para>
<para>Edit
<filename>/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini</filename>
to specify the following values: </para>
<screen><computeroutput>enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=<data-net-IP-address-of-node></computeroutput></screen>
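                <para>As an illustration, on an agent node whose
                    data-network IP address is 172.16.0.11 (an example
                    value; substitute your own), the resulting section
                    would read:</para>
                <screen><computeroutput>enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
local_ip=172.16.0.11</computeroutput></screen>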
<para>After performing that change on the node running
quantum-server, restart quantum-server to pick up
the new settings.</para>
<screen><computeroutput>service quantum-server restart</computeroutput></screen>
</section>
<section xml:id="nvp_plugin">
<title>Plugin Configuration: Nicira NVP Plugin</title>
<para> Make sure the NVP plugin is installed using:</para>
<screen><computeroutput>apt-get -y install quantum-plugin-nicira</computeroutput></screen>
<para>To configure OpenStack Networking to use the NVP plugin first
edit
<filename>/etc/quantum/quantum.conf</filename>
and set:</para>
<screen><computeroutput>core_plugin = quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2</computeroutput></screen>
<para>Edit
<filename>/etc/quantum/plugins/nicira/nvp.ini</filename>
in order to configure the plugin.</para>
<para>In the [DATABASE] section, specify the quantum database
created in the previous step using the following line,
substituting your database server IP address for localhost
if the database is not local:</para>
<screen><computeroutput>sql_connection = mysql://<user>:<password>@localhost/quantum?charset=utf8</computeroutput></screen>
<para>In order to tell OpenStack Networking about a controller
cluster, create a new [CLUSTER:<name>] section in the
config file, and add the following entries:</para>
<para>The UUID of the NVP Transport Zone that should be used
by default when a tenant creates a network. This value can
be retrieved from the NVP Manager Transport Zones page:</para>
<screen><computeroutput>default_tz_uuid = <uuid_of_the_transport_zone></computeroutput></screen>
<para>A connection string indicating parameters to be used by
the NVP plugin when connecting to the NVP webservice
API. There will be one of these lines in the config file
for each NVP controller in your deployment. An NVP operator
will likely want to update the NVP controller IP and password,
but the remaining fields can be the defaults:</para>
<screen><computeroutput>nvp_controller_connection = <controller_node_ip>:<controller_port>:<api_user>:<api_password>:<request_timeout>:<http_timeout>:<retries>:<redirects></computeroutput></screen>
<para>The UUID of an NVP L3 Gateway Service that should be
used by default when a tenant creates a router. This value
can be retrieved from the NVP Manager Gateway Services page:
</para>
<screen><computeroutput>default_l3_gw_service_uuid = <uuid_of_the_gateway_service></computeroutput></screen>
<warning>
<para> Ubuntu packaging currently does not update the quantum
init script to point to the NVP config file. Instead,
manually update <filename>/etc/default/quantum-server
</filename> to set:</para>
<screen><computeroutput>QUANTUM_PLUGIN_CONFIG = /etc/quantum/plugins/nicira/nvp.ini</computeroutput></screen>
</warning>
<para>Lastly, restart quantum-server to pick up the
new settings.</para>
<screen><computeroutput>service quantum-server restart</computeroutput></screen>
<para>An example quantum.conf file to use with NVP would be:
</para>
<screen><computeroutput>core_plugin = quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2
rabbit_host = 192.168.203.10
allow_overlapping_ips = True
</computeroutput></screen>
<para>An example nvp.ini file to use with NVP would be:</para>
<screen><computeroutput>[DATABASE]
sql_connection=mysql://root:root@127.0.0.1/quantum
[CLUSTER:main]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nvp_controller_connection=10.0.0.2:443:admin:admin:30:10:2:2
nvp_controller_connection=10.0.0.3:443:admin:admin:30:10:2:2
nvp_controller_connection=10.0.0.4:443:admin:admin:30:10:2:2
</computeroutput></screen>
</section>
<section xml:id="bigswitch_floodlight_plugin">
<title>Configuring Big Switch, Floodlight REST Proxy Plugin</title>
<para>To configure OpenStack Networking to use the REST Proxy plugin first
edit
<filename>/etc/quantum/quantum.conf</filename>
and set:</para>
<screen><computeroutput>core_plugin = quantum.plugins.bigswitch.plugin.QuantumRestProxyV2</computeroutput></screen>
<para>Edit
<filename>/etc/quantum/plugins/bigswitch/restproxy.ini</filename>
in order to configure the plugin. The quantum
database created previously will be used by
setting:</para>
<screen><computeroutput>sql_connection = mysql://<user>:<password>@localhost/restproxy_quantum?charset=utf8</computeroutput></screen>
                <para>Specify a comma-separated list of controller_ip:port pairs:</para>
<screen><computeroutput>server = <controller-ip>:<port></computeroutput></screen>
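                <para>For example, a deployment with two controllers
                    (the addresses and port below are illustrative)
                    would list both pairs in a single comma-separated
                    value:</para>
                <screen><computeroutput>server = 10.10.10.10:80,10.10.10.11:80</computeroutput></screen>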
<para>Lastly, restart quantum-server to pick up the
new settings.</para>
<screen><computeroutput>service quantum-server restart</computeroutput></screen>
</section>
<section xml:id="ryu_plugin">
<title>Configuring Ryu Plugin</title>
<para>Make sure the ryu plugin is installed using:</para>
<screen><computeroutput>apt-get -y install quantum-plugin-ryu</computeroutput></screen>
<para>To configure OpenStack Networking to use the Ryu plugin first
edit
<filename>/etc/quantum/quantum.conf</filename>
and set:</para>
<screen><computeroutput>core_plugin = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPluginV2</computeroutput></screen>
<para>Edit
<filename>/etc/quantum/plugins/ryu/ryu.ini</filename>
in order to configure the plugin.
In the [DATABASE] section, specify the quantum database
created in the previous step using the following line,
substituting your database server user/password/IP address/port
based on your setting:</para>
<screen><computeroutput>sql_connection = mysql://<user>:<password>@<ip-address>:<port>/quantum?charset=utf8</computeroutput></screen>
                <para>In the [OVS] section, set the values needed by
                    ryu-quantum-agent.
                    <literal>openflow_rest_api</literal> specifies where
                    Ryu is listening for its REST API; substitute the IP
                    address and port based on your Ryu setup.
                    <literal>ovsdb_interface</literal> is the interface
                    Ryu uses to access ovsdb-server; substitute eth0
                    based on your setup. The IP address is derived from
                    the interface name. To set the address independently
                    of the interface name, specify ovsdb_ip; a
                    non-default ovsdb-server port can be specified with
                    ovsdb_port.
                    <literal>tunnel_interface</literal> indicates which
                    IP address is used for tunneling (if tunneling is
                    not used, this value is ignored). Because the IP
                    address is derived from the network interface name,
                    the same configuration file can be used on many
                    compute nodes whose interfaces have different IP
                    addresses.
                </para>
<screen><computeroutput>openflow_rest_api = <ip-address>:<port-no>
ovsdb_interface = <eth0>
tunnel_interface = <eth0>
</computeroutput></screen>
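                <para>For example, with Ryu listening on
                    192.168.100.1:8080 and eth0 carrying both the ovsdb
                    and tunnel traffic (all values illustrative), the
                    section would read:</para>
                <screen><computeroutput>openflow_rest_api = 192.168.100.1:8080
ovsdb_interface = eth0
tunnel_interface = eth0</computeroutput></screen>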
<para>Lastly, restart quantum-server to pick up the
new settings.</para>
<screen><computeroutput>service quantum-server restart</computeroutput></screen>
</section>
<section xml:id="PLUMgridplugin">
<title>Configuring PLUMgrid Plugin</title>
<para>To configure OpenStack Networking to use the
PLUMgrid plugin first edit
<filename>/etc/quantum/quantum.conf</filename>
and set:</para>
<screen><computeroutput>core_plugin = quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin.QuantumPluginPLUMgridV2</computeroutput></screen>
<para>Edit
<filename>/etc/quantum/plugins/plumgrid/plumgrid.ini</filename>
in order to configure the plugin. The quantum
database created previously will be used by
setting:</para>
<screen><computeroutput>sql_connection = mysql://<user>:<password>@localhost/plumgrid_quantum?charset=utf8</computeroutput></screen>
                <para>In the [PLUMgridNOS] section, specify the IP
                    address of the PLUMgrid Director (also known as
                    NOS), along with the admin username and
                    password:</para>
<screen><computeroutput>servers=<plumgrid_NOS_IP>
username=<username>
password=<password></computeroutput></screen>
<para>Lastly, restart quantum-server to pick up the
new settings.</para>
<screen><computeroutput>service quantum-server restart</computeroutput></screen>
</section>
</section>
<section xml:id="install_quantum_agent">
<title>Install Software on Data Forwarding Nodes</title>
<para>Plugins commonly have requirements for particular software
that must be run on each node that handles data packets. This
includes any node running nova-compute, as well as nodes
running dedicated OpenStack Networking service agents like
quantum-dhcp-agent, quantum-l3-agent, quantum-lbaas-agent,
etc (see below for more information about
individual services agents).</para>
<para>Commonly, any data forwarding node should have a network
interface with an IP address on the “management
network” and another interface on the “data network”. </para>
<para>In this section, we describe the requirements
for particular plugins, which may include the installation of
switching software (e.g., Open vSwitch) as well as agents
used to communicate with the quantum-server process
running elsewhere in the data center.</para>
<section xml:id="install_quantum_agent_ovs">
<title>Node Setup: OVS Plugin</title>
<para>The Open vSwitch plugin requires Open vSwitch as well
as the quantum-plugin-openvswitch-agent agent
to be installed on each Data Forwarding Node.</para>
                <para>Install the OVS agent package, which pulls in the
                    Open vSwitch software as a dependency: </para>
<screen><computeroutput>apt-get -y install quantum-plugin-openvswitch-agent</computeroutput></screen>
                <para>The ovs_quantum_plugin.ini created in the above
                    step must be replicated on all nodes running
                    quantum-plugin-openvswitch-agent. When using
                    tunneling, each node running
                    quantum-plugin-openvswitch-agent should have an IP
                    address configured on the Data Network, and that
                    IP address should be specified using the local_ip
                    value in the ovs_quantum_plugin.ini file. </para>
<para>Then restart Open vSwitch to properly load the kernel
module:</para>
<screen><userinput>service openvswitch-switch restart</userinput></screen>
<para>And restart the agent:</para>
<screen><computeroutput>service quantum-plugin-openvswitch-agent restart</computeroutput></screen>
                <para>All hosts running
                    quantum-plugin-openvswitch-agent also require
                    that an OVS bridge named "br-int" exists. To
                    create it, run:</para>
<screen><computeroutput>ovs-vsctl add-br br-int</computeroutput></screen>
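                <para>To verify that the bridge was created, list the
                    bridges known to Open vSwitch and confirm that
                    "br-int" appears in the output:</para>
                <screen><computeroutput>ovs-vsctl list-br</computeroutput></screen>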
</section>
<section xml:id="install_quantum_agent_nvp">
<title>Node Setup: Nicira NVP Plugin</title>
<para>The Nicira NVP plugin requires a version of Open vSwitch to be installed on each data forwarding node, but
does not require an additional agent on data forwarding nodes.</para>
<warning><para>It is critical that you are running a version of
Open vSwitch that is compatible with the current version of the NVP Controller software. Do not use the version of
Open vSwitch installed by default on Ubuntu. Instead, use the version of Open Vswitch provided on the Nicira
support portal for your version of the NVP Controller.</para></warning>
<para>Each data forwarding node should have an IP address on the "management network", as well as an IP address
on the "data network" used for tunneling data traffic.</para>
<para>For full details on configuring your forwarding node, please see the NVP Administrator Guide. Next, use
the same guide to add the node as a "Hypervisor" using the NVP Manager GUI (Note: even if your forwarding node
has no VMs and is only used for services agents like quantum-dhcp-agent or quantum-lbaas-agent, it should be
added to NVP as a Hypervisor).</para>
<para>After following the NVP Administrator Guide, use the page for this Hypervisor in the NVP Manager GUI
to confirm that the node is properly connected to the NVP
Controller Cluster and that the NVP Controller Cluster is seeing the integration bridge "br-int".</para>
</section>
<section xml:id="install_quantum_agent_ryu">
<title>Node Setup: Ryu Plugin</title>
                <para>The Ryu plugin requires Open vSwitch and Ryu;
                    install both in addition to the Ryu agent package.</para>
                <para>Install Ryu (there is no Ryu package for Ubuntu yet):</para>
<screen><computeroutput>pip install ryu</computeroutput></screen>
<para>Install the Ryu agent package and openvswitch package: </para>
<screen><computeroutput>apt-get -y install quantum-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms</computeroutput></screen>
                <para>The ovs_ryu_plugin.ini and quantum.conf created in the above
                    step must be replicated on all nodes running
                    quantum-plugin-ryu-agent. </para>
                <para>Then restart Open vSwitch to properly load the kernel
                    module:</para>
                <screen><prompt>$</prompt> <userinput>sudo service openvswitch-switch restart</userinput></screen>
                <para>And restart the agent:</para>
                <screen><prompt>$</prompt> <userinput>sudo service quantum-plugin-ryu-agent restart</userinput></screen>
                <para>All hosts running
                    quantum-plugin-ryu-agent also require
                    that an OVS bridge named "br-int" exists. To
                    create it, run:</para>
<screen><computeroutput>ovs-vsctl add-br br-int</computeroutput></screen>
</section>
</section>
<section xml:id="install_quantum_dhcp">
<title>Install DHCP Agent</title>
<para>The DHCP service agent is compatible with all existing plugins and is required for all deployments
where VMs should automatically receive IP addresses via DHCP.</para>
<para>The host running the quantum-dhcp-agent must be configured as a "data forwarding node" according to your
plugin's requirements (see section above).</para>
<para>In addition, you must install the DHCP agent:</para>
<screen><computeroutput>apt-get -y install quantum-dhcp-agent</computeroutput></screen>
<para>Some options in <filename>/etc/quantum/dhcp_agent.ini</filename> must have certain values that
depend on the plugin in use. The sub-sections below will indicate those values for certain plugins.</para>
<section xml:id="dhcp_agent_ovs">
<title>DHCP Agent Setup: OVS Plugin</title>
<para>The following DHCP agent options are required for the OVS plugin:</para>
<screen><computeroutput>
[DEFAULT]
ovs_use_veth = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
</computeroutput></screen>
</section>
<section xml:id="dhcp_agent_nvp">
<title>DHCP Agent Setup: NVP Plugin</title>
<para>The following DHCP agent options are required for the NVP plugin:</para>
<screen><computeroutput>
[DEFAULT]
ovs_use_veth = True
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
</computeroutput></screen>
</section>
<section xml:id="dhcp_agent_ryu">
<title>DHCP Agent Setup: Ryu Plugin</title>
<para>The following DHCP agent options are required for the Ryu plugin:</para>
<screen><computeroutput>
[DEFAULT]
ovs_use_veth = True
use_namespaces = True
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
</computeroutput></screen>
</section>
</section>
<section xml:id="install_quantum-l3">
<title>Install L3 Agent</title>
<para>Quantum has a widely used API extension to allow administrators and tenants to create "routers" that
connect to L2 networks.</para>
<para>Many plugins rely on the L3 service agent to implement this L3 functionality.
However, the following plugins have built in L3 capabilities:
</para>
<para>
<itemizedlist>
<listitem><para>Nicira NVP Plugin</para></listitem>
<listitem><para>Floodlight/BigSwitch Plugin (L3 functionality with BigSwitch only)</para></listitem>
<listitem><para>PLUMgrid Plugin</para></listitem>
</itemizedlist>
</para>
<warning>
<para> Do NOT configure or use <filename>quantum-l3-agent</filename> if you are using
one of the above plugins.</para>
</warning>
<note><para>The Floodlight/BigSwitch plugin supports both the open source <link
xlink:href="http://www.projectfloodlight.org/floodlight/">Floodlight</link>
controller and the proprietary BigSwitch controller. However, only the
proprietary BigSwitch controller implements L3 functionality. When using
Floodlight as your OpenFlow controller, L3 functionality is not available. </para></note>
<para>For all other plugins, install the quantum-l3-agent binary on the network node. </para>
<screen><computeroutput>apt-get -y install quantum-l3-agent</computeroutput></screen>
<para>Create a bridge "br-ex" that will be used to uplink
this node running quantum-l3-agent to the external
network, then attach the NIC attached to the external
network to this bridge.</para>
            <para>For example, with Open vSwitch and NIC eth1 connected
                to the external network, run:</para>
<screen><computeroutput>ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1</computeroutput></screen>
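            <para>To confirm that eth1 was attached to the bridge, list
                the ports on br-ex:</para>
            <screen><computeroutput>ovs-vsctl list-ports br-ex</computeroutput></screen>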
<para>The node running quantum-l3-agent should not have an
IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of
IP addresses from the external network that can be
used by OpenStack Networking for routers that uplink to the
external network. This range must be large enough to
have an IP address for each router in the deployment,
as well as each floating IP.</para>
<para> The quantum-l3-agent uses the Linux IP stack and
iptables to perform L3 forwarding and NAT. In order to
support multiple routers with potentially overlapping
IP addresses, quantum-l3-agent defaults to using Linux
network namespaces to provide isolated forwarding
contexts. As a result, the IP addresses of routers
will not be visible simply by running "ip addr list"
or "ifconfig" on the node. Similarly, you will not be
able to directly ping fixed IPs. To do either of these
things, you must run the command within a particular
router's network namespace. The namespace will have
                the name "qrouter-<UUID of the router>". The
following commands are examples of running commands in
the namespace of a router with UUID
47af3868-0fa8-4447-85f6-1304de32153b: </para>
<screen>
<computeroutput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip></computeroutput>
</screen>
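            <para>To find the router namespaces present on the node,
                and thus the UUIDs to use in such commands, list all
                network namespaces:</para>
            <screen><computeroutput>ip netns list</computeroutput></screen>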
</section>
<section xml:id="install_quantum_client">
<title>Install OpenStack Networking CLI Client</title>
<para>Install the OpenStack Networking CLI client:</para>
<screen><computeroutput>apt-get -y install python-pyparsing python-cliff python-quantumclient</computeroutput></screen>
</section>
<section xml:id="init_config">
<title>Init, Config, and Log File Locations</title>
<para>Services can be started and stopped using the
'service' command. For example:</para>
<screen><computeroutput>service quantum-server stop
service quantum-server status
service quantum-server start
service quantum-server restart</computeroutput></screen>
<para> Log files are found in /var/log/quantum. </para>
<para> Configuration files are in /etc/quantum.</para>
</section>
</section>
<section xml:id="install_fedora">
<title>Installing Packages (Fedora) </title>
<para>The OpenStack packages for Fedora can be retrieved from:
<uri>https://apps.fedoraproject.org/packages/s/openstack</uri>. Additional
information can be found at <link
xlink:href="https://fedoraproject.org/wiki/OpenStack"
>https://fedoraproject.org/wiki/OpenStack</link></para>
<section xml:id="fedora_rpc_setup">
<title xml:id="qpid_rpc_setup">RPC Setup </title>
<para>OpenStack Networking uses RPC to allow DHCP agents and any plugin
agents to communicate with the main quantum-server
process. Commonly, this can use the same RPC
mechanism used by other OpenStack components like
Nova.</para>
<para>To use QPID AMQP as the message bus for RPC, make
sure that QPID is installed on a host reachable via
the management network (if this is already the case
because of deploying another service like Nova, this
existing QPID setup is sufficient): </para>
<screen><computeroutput>sudo yum -y install qpid-cpp-server qpid-cpp-server-daemon
sudo chkconfig qpidd on
sudo service qpidd start</computeroutput></screen>
<para>Then update /etc/quantum/quantum.conf with these
values: </para>
<screen><computeroutput>rpc_backend = quantum.openstack.common.rpc.impl_qpid
qpid_hostname = <mgmt-IP-of-qpid-host></computeroutput></screen>
<important>
                <para>The Fedora packaging has a number of utility
                    scripts that configure all of the necessary
                    configuration files. The scripts can also be used
                    to understand what needs to be configured for the
                    specific OpenStack Networking services. These
                    scripts are described below. They make use of the
                    openstack-utils package, which must be
                    installed:</para>
<para>
<screen><computeroutput>sudo yum install -y openstack-utils</computeroutput></screen>
</para>
</important>
</section>
<section xml:id="fedora_q_server">
<title>Install quantum-server and plugin </title>
<para>Install quantum-server and plugin. <emphasis role="bold">Note</emphasis> the
client is installed as a dependency for the OpenStack Networking service. Each
plugin has its own package, named openstack-quantum-<plugin>. openvswitch will be
used in the examples below. A complete list of the supported plugins can be seen at:
<link
xlink:href="https://fedoraproject.org/wiki/Quantum#Quantum_Plugins"
>https://fedoraproject.org/wiki/Quantum#Quantum_Plugins</link>.</para>
<screen><computeroutput>sudo yum install -y openstack-quantum
sudo yum install -y openstack-quantum-openvswitch</computeroutput></screen>
            <para>Most plugins require a database to be installed and
                configured in a plugin configuration file. The Fedora
                packaging for OpenStack Networking includes server setup
                utility scripts that take care of this. For example: </para>
<screen><computeroutput>sudo quantum-server-setup --plugin openvswitch</computeroutput></screen>
<para>Enable and start the service:</para>
<screen><computeroutput>sudo chkconfig quantum-server on
sudo service quantum-server start</computeroutput></screen>
</section>
<section xml:id="fedora_q_plugin">
<title>Install quantum-plugin-*-agent</title>
<para>Some plugins utilize an agent that runs on each node
that handles data packets. This includes any node
running nova-compute, as well as nodes running
dedicated OpenStack Networking agents like quantum-dhcp-agent and
quantum-l3-agent (see below). If your plugin uses an
agent, this section describes how to run the agent for
this plugin, as well as the basic configuration
options.</para>
<section xml:id="fedora_q_agent">
<title>Open vSwitch Agent</title>
<para>Install the OVS agent: </para>
<screen><computeroutput>sudo yum install -y openstack-quantum-openvswitch</computeroutput></screen>
<para>Run the agent setup script:</para>
<screen><computeroutput>sudo quantum-node-setup --plugin openvswitch</computeroutput></screen>
                <para>All hosts running quantum-openvswitch-agent also require that an OVS
                    bridge named "br-int" exists. To create it, run:</para>
<screen><computeroutput>ovs-vsctl add-br br-int</computeroutput></screen>
<para>Enable and start the agent:</para>
<screen><computeroutput>sudo chkconfig quantum-openvswitch-agent on
sudo service quantum-openvswitch-agent start</computeroutput></screen>
<para>Enable the ovs cleanup utility:</para>
<screen><computeroutput>sudo chkconfig quantum-ovs-cleanup on</computeroutput></screen>
</section>
</section>
<section xml:id="fedora_q_dhcp">
<title>Install quantum-dhcp-agent</title>
<para>The DHCP agent is part of the openstack-quantum
package.</para>
<screen><computeroutput>sudo yum install -y openstack-quantum</computeroutput></screen>
<para>Run the agent setup script:</para>
<screen><computeroutput>sudo quantum-dhcp-setup --plugin openvswitch</computeroutput></screen>
<para>Enable and start the agent:</para>
<screen><computeroutput>sudo chkconfig quantum-dhcp-agent on
sudo service quantum-dhcp-agent start</computeroutput></screen>
</section>
<section xml:id="fedora_q_l3">
<title>Install quantum-l3-agent </title>
<para>The L3 agent is part of the openstack-quantum
package.</para>
<para>Create a bridge "br-ex" that will be used to uplink
this node running quantum-l3-agent to the external
network, then attach the NIC attached to the external
network to this bridge. For example, with Open vSwitch
and NIC eth1 connect to the external network,
run:</para>
<screen><computeroutput>sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth1</computeroutput></screen>
<para>The node running quantum-l3-agent should not have an
IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of
IP addresses from the external network that can be
used by OpenStack Networking for routers that uplink to the
external network. This range must be large enough to
have an IP address for each router in the deployment,
as well as each floating IP.</para>
<screen><computeroutput>sudo yum install -y openstack-quantum</computeroutput></screen>
<para>Run the agent setup script:</para>
<screen><computeroutput>sudo quantum-l3-setup --plugin openvswitch</computeroutput></screen>
<para>Enable and start the agent:</para>
            <screen><computeroutput>sudo chkconfig quantum-l3-agent on
sudo service quantum-l3-agent start</computeroutput></screen>
<para>Enable and start the meta data agent:</para>
<screen><computeroutput>sudo chkconfig quantum-metadata-agent on
sudo service quantum-metadata-agent start</computeroutput></screen>
</section>
<section xml:id="fedora_q_client">
<title>Install OpenStack Networking CLI client</title>
<para>Install the OpenStack Networking CLI client:</para>
<screen><computeroutput>sudo yum install -y python-quantumclient</computeroutput></screen>
</section>
<section xml:id="fedora_misc">
<title><?sbr?>Init, Config, and Log File Locations</title>
<para>Services can be started and stopped using the
'service' command. For example:</para>
<screen><computeroutput>sudo service quantum-server stop
sudo service quantum-server status
sudo service quantum-server start
sudo service quantum-server restart</computeroutput></screen>
<para>Log files are found in /var/log/quantum. </para>
<para>Configuration files are in /etc/quantum.</para>
</section>
</section>
</chapter>