<?xml version="1.0"?>
<!DOCTYPE section [
<!ENTITY % entities SYSTEM "entities.ent"> %entities;
]>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="standalone-deployer" version="5.1">
<title>Using a Dedicated &clm; Node</title>
<para>
   All of the included example configurations host the &clm; on the first
   &contrnode;. It is also possible to deploy this service on a dedicated
   node. One use case for running a dedicated &clm; is to test the deployment
   of different configurations without having to re-install the first server.
   Some administrators also prefer the additional security of keeping all of
   the configuration data on a server separate from those that cloud users
   connect to (although all of the data can be encrypted and SSH keys can be
   password protected).
</para>
<para>
Here is a graphical representation of this setup:
</para>
<informalfigure>
<mediaobject>
<imageobject role="fo">
<imagedata fileref="media-examples-entry_scale_kvm.png" width="75%"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="media-examples-entry_scale_kvm.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<section xml:id="sec-specify-lifecycle-manager">
<title>Specifying a dedicated &clm; in your input model</title>
<para>
To specify a dedicated &clm; in your input model, make the following edits
to your configuration files.
</para>
<important>
<para>
    The indentation in each of the input files is significant; incorrect
    indentation will cause errors. Use the existing content in each of these
    files as a reference when adding the additional content for your &clm;.
</para>
</important>
<itemizedlist>
<listitem>
<para>
Update <filename>control_plane.yml</filename> to add the &clm;.
</para>
</listitem>
<listitem>
<para>
Update <filename>server_roles.yml</filename> to add the &clm; role.
</para>
</listitem>
<listitem>
<para>
Update <filename>net_interfaces.yml</filename> to add the interface
definition for the &clm;.
</para>
</listitem>
<listitem>
<para>
Create a <filename>disks_lifecycle_manager.yml</filename> file to define
the disk layout for the &clm;.
</para>
</listitem>
<listitem>
<para>
Update <filename>servers.yml</filename> to add the dedicated &clm; node.
</para>
</listitem>
</itemizedlist>
<para>
   <filename>control_plane.yml</filename>: The snippet below shows the addition
   of a single-node cluster into the control plane to host the &clm; service.
   Note that, in addition to adding the new cluster, you also have to remove
   the &clm; component from <literal>cluster1</literal> in the examples:
</para>
<screen> clusters:
<emphasis role="bold"> - name: cluster0
cluster-prefix: c0
server-role: LIFECYCLE-MANAGER-ROLE
member-count: 1
allocation-policy: strict
service-components:
- lifecycle-manager</emphasis>
- ntp-client
- name: cluster1
cluster-prefix: c1
server-role: CONTROLLER-ROLE
member-count: 3
allocation-policy: strict
service-components:
- lifecycle-manager
- ntp-server
- tempest</screen>
<para>
This specifies a single node of role
<literal>LIFECYCLE-MANAGER-ROLE</literal> hosting the &clm;.
</para>
<para>
   <filename>server_roles.yml</filename>: The snippet below shows the insertion
   of the new server role definition:
</para>
<screen> server-roles:
<emphasis role="bold"> - name: LIFECYCLE-MANAGER-ROLE
interface-model: LIFECYCLE-MANAGER-INTERFACES
disk-model: LIFECYCLE-MANAGER-DISKS</emphasis>
- name: CONTROLLER-ROLE</screen>
<para>
This defines a new server role which references a new interface-model and
disk-model to be used when configuring the server.
</para>
<para>
   <filename>net_interfaces.yml</filename>: The snippet below shows the
   insertion of the network-interface information:
</para>
<screen><emphasis role="bold"> - name: LIFECYCLE-MANAGER-INTERFACES
network-interfaces:
- name: BOND0
device:
name: bond0
bond-data:
options:
mode: active-backup
miimon: 200
primary: hed3
provider: linux
devices:
- name: hed3
- name: hed4
network-groups:
- MANAGEMENT</emphasis></screen>
<para>
This assumes that the server uses the same physical networking layout as the
other servers in the example.
<!-- <xref/> led to a removed VSA-related section originally. -->
<!-- For details on how to modify this to match your configuration, see
<xref keyref="localizing_inputmodel/netinterfaces"/>. -->
</para>
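  <para>
   If the dedicated node uses a different NIC layout, the logical device names
   referenced above (<literal>hed3</literal> and <literal>hed4</literal>) must
   still resolve through a matching entry in
   <filename>nic_mappings.yml</filename>. The snippet below is only a minimal
   sketch of the general shape of such an entry; the mapping name and the PCI
   bus addresses are placeholders and must be replaced with the values for
   your own hardware:
  </para>
  <screen> nic-mappings:
   - name: LIFECYCLE-MANAGER-4PORT      # placeholder mapping name
     physical-ports:
       - logical-name: hed3
         type: simple-port
         bus-address: "0000:07:00.0"    # placeholder bus address
       - logical-name: hed4
         type: simple-port
         bus-address: "0000:08:00.0"    # placeholder bus address</screen>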
<para>
   <filename>disks_lifecycle_manager.yml</filename>: In the examples,
   disk-models are provided as separate files (this is a convention, not a
   limitation), so add the following content as a new file named
   <filename>disks_lifecycle_manager.yml</filename>:
</para>
<screen>---
product:
version: 2
disk-models:
<emphasis role="bold"> - name: LIFECYCLE-MANAGER-DISKS
    # Disk model to be used for &clm; nodes
# /dev/sda_root is used as a volume group for /, /var/log and /var/crash
# sda_root is a templated value to align with whatever partition is really used
# This value is checked in os config and replaced by the partition actually used
# on sda e.g. sda1 or sda5
volume-groups:
- name: ardana-vg
physical-volumes:
- /dev/sda_root
logical-volumes:
# The policy is not to consume 100% of the space of each volume group.
# 5% should be left free for snapshots and to allow for some flexibility.
- name: root
size: 80%
fstype: ext4
mount: /
- name: crash
size: 15%
mount: /var/crash
fstype: ext4
mkfs-opts: -O large_file
consumer:
name: os</emphasis></screen>
<para>
   <filename>servers.yml</filename>: The snippet below shows the insertion of an
   additional server used for hosting the &clm;. Provide the address
   information here for the server you are running on, that is, the node where
   you have installed the &kw-hos; ISO.
</para>
<screen> servers:
# NOTE: Addresses of servers need to be changed to match your environment.
#
# Add additional servers as required
<emphasis role="bold"> #Lifecycle-manager
- id: lifecycle-manager
ip-addr: <replaceable>YOUR IP ADDRESS HERE</replaceable>
role: LIFECYCLE-MANAGER-ROLE
server-group: RACK1
nic-mapping: HP-SL230-4PORT
mac-addr: 8c:dc:d4:b5:c9:e0
# ipmi information is not needed </emphasis>
# Controllers
- id: controller1
ip-addr: 192.168.10.3
role: CONTROLLER-ROLE</screen>
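  <para>
   After making these edits, it is a good idea to validate the modified input
   model by committing the changes and re-running the configuration processor
   before deploying. The commands below are only a sketch; they assume the
   playbooks are located under
   <filename>~/openstack/ardana/ansible</filename>, which may differ between
   product versions:
  </para>
  <screen>cd ~/openstack/ardana/ansible
git add -A
git commit -m "Add dedicated &clm; node"
ansible-playbook -i hosts/localhost config-processor-run.yml</screen>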
<important>
<para>
    With a stand-alone deployer, the &ostack; CLI and other clients will not be
    installed automatically. You need to install the &ostack; clients manually
    to get the desired &ostack; capabilities. For more information and
    installation instructions, see <xref linkend="install-openstack-clients"/>.
</para>
</important>
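  <para>
   As a generic illustration only (the linked section describes the supported
   procedure and package names for this product), the upstream command-line
   client can be installed with <command>pip</command>, assuming Python and
   <command>pip</command> are available on the node you are working from:
  </para>
  <screen># Illustration only; see the linked section for the supported packages
pip install python-openstackclient
openstack --version</screen>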
</section>
</section>